Frankly speaking, you cannot create a Linux partition larger than 2TB using the fdisk command. This is fine for desktop and laptop users, but on a server you often need a larger partition. For example, fdisk will not let you create a 3TB or 4TB partition (say, on a RAID array), because it only understands MBR partition tables, which top out at 2TB. In this tutorial, you will learn how to create Linux partitions and file systems larger than 2 Terabytes, to support enterprise grade operation, under any Linux distribution.
To solve this problem, use the GNU parted command with a GPT disklabel. parted supports Intel EFI/GPT partition tables. The GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a replacement for the outdated PC BIOS, one of the few remaining relics of the original IBM PC. Where BIOS uses a Master Boot Record (MBR), EFI uses GPT.
(Fig.01: Diagram illustrating the layout of the GUID Partition Table scheme. Each logical block (LBA) is 512 bytes in size. Negative LBA addresses indicate position from the end of the volume, with −1 being the last addressable block. Image credit: Wikipedia)
Linux GPT Kernel Support
EFI GUID partition support works on both 32-bit and 64-bit platforms. You must include GPT support in the kernel in order to use GPT. If you do not include GPT support in the Linux kernel, then after rebooting the server the file system will no longer be mountable, or the GPT table may get corrupted. By default, Red Hat Enterprise Linux / CentOS comes with GPT kernel support. However, if you are using Debian or Ubuntu Linux, you may need to recompile the kernel. Set CONFIG_EFI_PARTITION to y to compile in this feature:
File Systems
  Partition Types
    [*] Advanced partition selection
    [*] EFI GUID Partition support (NEW)
    ....
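Before rebooting into a rebuilt kernel, you can check whether the running kernel was already built with GPT support. A minimal sketch (the helper name is illustrative; /boot/config-$(uname -r) is the usual config location on Debian/Ubuntu, so adjust for your distribution):

```shell
# Return success if the given kernel config file enables GPT support.
has_gpt_support() {
  # CONFIG_EFI_PARTITION=y means EFI GUID partition support is compiled in
  grep -q '^CONFIG_EFI_PARTITION=y' "$1"
}

cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ] && has_gpt_support "$cfg"; then
  echo "GPT support: enabled"
else
  echo "GPT support: missing or unknown - check CONFIG_EFI_PARTITION"
fi
```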
Find Out Current Disk Size
Type the following command:
# fdisk -l /dev/sdb
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table
Linux Create 3TB partition size
To create a partition start GNU parted as follows:
# parted /dev/sdb
GNU Parted 2.3
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)
Create a new GPT disklabel (i.e. partition table):
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted)
Next, set the default unit to TB, enter:
(parted) unit TB
To create a 3TB partition size, enter:
(parted) mkpart primary 0.00TB 3.00TB
To print the current partition table, enter:
(parted) print

Model: ATA ST33000651AS (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB  ext4         primary
Quit and save the changes, enter:
(parted) quit

Information: You may need to update /etc/fstab.
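The whole interactive session above can also be scripted with parted's -s (script) option. A minimal sketch, demonstrated on a small image file so it is safe to try out; point DISK at the real device (e.g. /dev/sdb) for actual use. The 0% and 100% arguments let parted pick properly aligned start and end points:

```shell
# DISK is an assumption: a sparse image file stands in for the real 3TB disk.
DISK=disk.img
truncate -s 100M "$DISK"                   # create a sparse test image
parted -s "$DISK" mklabel gpt              # new GPT disklabel (destroys data!)
parted -s "$DISK" mkpart primary 0% 100%   # one partition spanning the disk
parted -s "$DISK" print                    # verify the partition table
```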
Use the mkfs.ext3 or mkfs.ext4 command to format the file system, enter:
# mkfs.ext3 /dev/sdb1
# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
183148544 inodes, 732566272 blocks
36628313 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
22357 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000, 214990848, 512000000, 550731776, 644972544

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 31 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.
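The mkfs output above notes that the new file system will be checked every 31 mounts or 180 days; on a large dedicated data volume you may prefer to disable that, and to shrink the default 5% root reserve. A hedged sketch, run against a small image file for safety (use /dev/sdb1 on the real system):

```shell
# FS is an assumption: a small image file stands in for /dev/sdb1.
FS=fs.img
truncate -s 64M "$FS"       # small stand-in for the real partition
mkfs.ext4 -q -F "$FS"       # -F: allow formatting a regular file
tune2fs -c 0 -i 0 "$FS"     # disable mount-count and interval-based fsck
tune2fs -m 1 "$FS"          # reserve 1% (instead of 5%) for the super user
```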
Type the following commands to mount /dev/sdb1:
# mkdir /data
# mount /dev/sdb1 /data
# df -H
Filesystem             Size  Used Avail Use% Mounted on
/dev/sdc1               16G  819M   14G   6% /
tmpfs                  1.6G     0  1.6G   0% /lib/init/rw
udev                   1.6G  123k  1.6G   1% /dev
tmpfs                  1.6G     0  1.6G   0% /dev/shm
/dev/sdb1              3.0T  211M  2.9T   1% /data
Make sure you replace /dev/sdb1 with your actual RAID or disk name, or block Ethernet device such as /dev/etherd/e0.0. Do not forget to update /etc/fstab, if necessary. Also note that booting from a GPT volume requires support in your BIOS / firmware; this is not supported on non-EFI platforms. I suggest you boot the server from another disk, such as an IDE / SATA / SSD disk, and store data on /data.
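To make the mount persistent across reboots, add an /etc/fstab entry. Mounting by UUID is more robust than by device name, since names like /dev/sdb1 can change when disks are added or removed. The UUID below is a placeholder; look up the real one with blkid:

```shell
# Get the UUID of the new file system:
#   blkid /dev/sdb1
# Then append a line like this to /etc/fstab (UUID is a placeholder):
#   UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext4  defaults  0  2
```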
- How Basic Disks and Volumes Work (a little outdated, but good for understanding the basic concepts)
- GUID Partition Table from the Wikipedia
- man pages parted
Opportunities and Challenges in High Pressure Processing of Foods
By Rastogi, N K; Raghavarao, K S M S; Balasubramaniam, V M; Niranjan, K; Knorr, D
Consumers increasingly demand convenience foods of the highest quality in terms of natural flavor and taste, and which are free from additives and preservatives. This demand has triggered the need for the development of a number of nonthermal approaches to food processing, of which high-pressure technology has proven to be very valuable. A number of recent publications have demonstrated novel and diverse uses of this technology. Its novel features, which include destruction of microorganisms at room temperature or lower, have made the technology commercially attractive. Enzymes and even spore-forming bacteria can be inactivated by the application of pressure-thermal combinations. This review aims to identify the opportunities and challenges associated with this technology. In addition to discussing the effects of high pressure on food components, this review covers the combined effects of high pressure processing with: gamma irradiation, alternating current, ultrasound, and carbon dioxide or anti-microbial treatment. Further, the applications of this technology in various sectors-fruits and vegetables, dairy, and meat processing-have been dealt with extensively. The integration of high pressure with other matured processing operations such as blanching, dehydration, osmotic dehydration, rehydration, frying, freezing / thawing, and solid-liquid extraction has been shown to open up new processing options. The key challenges identified include: heat transfer problems and the resulting non-uniformity in processing, obtaining reliable and reproducible data for process validation, and a lack of detailed knowledge about the interaction between high pressure and a number of food constituents, packaging, and statutory issues.
Keywords: high pressure, food processing, non-thermal processing
Consumers demand high quality and convenient products with natural flavor and taste, and greatly appreciate the fresh appearance of minimally processed food. Besides, they look for safe and natural products without additives such as preservatives and humectants. In order to harmonize or blend all these demands without compromising the safety of the products, it is necessary to implement newer preservation technologies in the food industry. Although the fact that “high pressure kills microorganisms and preserves food” was discovered as far back as 1899, and has been used with success in the chemical, ceramic, carbon allotropy, steel/alloy, composite materials, and plastic industries for decades, it was only in the late 1980s that its commercial benefits became available to the food processing industries. High pressure processing (HPP) is similar in concept to cold isostatic pressing of metals and ceramics, except that it demands much higher pressures, faster cycling, high capacity, and sanitation (Zimmerman and Bergman, 1993; Mertens and Deplace, 1993). Hite (1899) investigated the application of high pressure as a means of preserving milk, and later extended the study to preserve fruits and vegetables (Hite, Giddings, and Weakly, 1914). It then took almost eighty years for Japan to rediscover the application of high pressure in food processing. The uptake of this technology has been so quick that it took only three years for two Japanese companies to launch products processed using it. The ability of high pressure to inactivate microorganisms and spoilage-catalyzing enzymes, whilst retaining other quality attributes, has encouraged Japanese and American food companies to introduce high pressure processed foods in the market (Mermelstein, 1997; Hendrickx, Ludikhuyze, Broeck, and Weemaes, 1998).
The first high pressure processed foods were introduced to the Japanese market in 1990 by Meidi-ya, who have been marketing a line of jams, jellies, and sauces packaged and processed without application of heat (Thakur and Nelson, 1998). Other products include fruit preparations, fruit juices, rice cakes, and raw squid in Japan; fruit juices, especially apple and orange juice, in France and Portugal; and guacamole and oysters in the USA (Hugas, Garcia, and Monfort, 2002). In addition to food preservation, high-pressure treatment can result in food products acquiring novel structure and texture, and hence can be used to develop new products (Hayashi, 1990) or increase the functionality of certain ingredients. Depending on the operating parameters and the scale of operation, the cost of high-pressure treatment is typically around US$ 0.05-0.5 per liter or kilogram, the lower value being comparable to the cost of thermal processing (Thakur and Nelson, 1998; Balasubramaniam, 2003).
The non-availability of suitable equipment encumbered early applications of high pressure. However, recent progress in equipment design has ensured worldwide recognition of the potential for such a technology in food processing (Gould, 1995; Galazka and Ledward, 1995; Balci and Wilbey, 1999). Today, high-pressure technology is acknowledged to have the promise of producing a very wide range of products, whilst simultaneously showing potential for creating a new generation of value added foods. In general, high-pressure technology can supplement conventional thermal processing for reducing microbial load, or substitute the use of chemical preservatives (Rastogi, Subramanian, and Raghavarao, 1994).
Over the past two decades, this technology has attracted considerable research attention, mainly relating to: i) the extension of keeping quality (Cheftel, 1995; Farkas and Hoover, 2001), ii) changing the physical and functional properties of food systems (Cheftel, 1992), and iii) exploiting the anomalous phase transitions of water under extreme pressures, e.g. lowering of freezing point with increasing pressures (Kalichevsky, Knorr, and Lillford, 1995; Knorr, Schlueter, and Heinz, 1998). The key advantages of this technology can be summarized as follows:
1. it enables food processing at ambient temperature or even lower temperatures;
2. it enables instant transmittance of pressure throughout the system, irrespective of size and geometry, thereby making size reduction optional, which can be a great advantage;
3. it causes microbial death whilst virtually eliminating heat damage and the use of chemical preservatives/additives, thereby leading to improvements in the overall quality of foods; and
4. it can be used to create ingredients with novel functional properties.
The effect of high pressure on microorganisms and proteins/enzymes was observed to be similar to that of high temperature. As mentioned above, high pressure processing enables transmittance of pressure rapidly and uniformly throughout the food. Consequently, the problems of spatial variations in preservation treatments associated with heat, microwave, or radiation penetration are not evident in pressure-processed products. The application of high pressure increases the temperature of the liquid component of the food by approximately 3C per 100 MPa. If the food contains a significant amount of fat, such as butter or cream, the temperature rise is greater (8-9C/100 MPa) (Rasanayagam, Balasubramaniam, Ting, Sizer, Bush, and Anderson, 2003). Foods cool down to their original temperature on decompression if no heat is lost to (or gained from) the walls of the pressure vessel during the holding stage. The temperature distribution during the pressure-holding period can change depending on heat transfer across the walls of the pressure vessel, which must be held at the desired temperature for achieving truly isothermal conditions. In the case of some proteins, a gel is formed when the rate of compression is slow, whereas a precipitate is formed when the rate is fast. High pressure can cause structural changes in structurally fragile foods containing entrapped air, such as strawberries or lettuce. Cell deformation and cell damage can result in softening and cell serum loss. Compression may also shift the pH depending on the imposed pressure. Heremans (1995) indicated a lowering of pH in apple juice by 0.2 units per 100 MPa increase in pressure. In combined thermal and pressure treatment processes, Meyer (2000) proposed that the heat of compression could be used effectively, since the temperature of the product can be raised from 70-90C to 105-120C by a compression to 700 MPa, and brought back to the initial temperature by decompression.
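The compression heating figures quoted above can be summarized as a rule of thumb (an approximation for water-like foods, not an exact relation):

```latex
\Delta T \;\approx\; 3\,^{\circ}\mathrm{C} \times \frac{P}{100\ \mathrm{MPa}},
\qquad \text{e.g. } P = 600\ \mathrm{MPa} \;\Rightarrow\; \Delta T \approx 18\,^{\circ}\mathrm{C}
```

For fat-rich foods the quoted coefficient of 8-9C per 100 MPa applies instead, roughly tripling the estimated rise.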
As a thermodynamic parameter, pressure has far-reaching effects on the conformation of macromolecules, the transition temperature of lipids and water, and a number of chemical reactions (Cheftel, 1992; Tauscher, 1995). Phenomena that are accompanied by a decrease in volume are enhanced by pressure, and vice-versa (principle of Le Chatelier). Thus, under pressure, reaction equilibria are shifted towards the most compact state, and the reaction rate constant is increased or decreased, depending on whether the “activation volume” of the reaction (i.e. volume of the activation complex less volume of reactants) is negative or positive. It is likely that pressure also inhibits the availability of the activation energy required for some reactions, by affecting some other energy releasing enzymatic reactions (Farr, 1990). The compression energy of 1 litre of water at 400 MPa is 19.2 kJ, as compared to 20.9 kJ for heating 1 litre of water from 20 to 25C. The low energy levels involved in pressure processing may explain why covalent bonds of food constituents are usually less affected than weak interactions. Pressure can influence most biochemical reactions, since they often involve a change in volume. High pressure controls certain enzymatic reactions. Unlike the effect of temperature, the effect of high pressure on protein/enzyme is reversible in the range 100-400 MPa, and is probably due to conformational changes and sub-unit dissociation and association processes (Morild, 1981).
For both the pasteurization and sterilization processes, a combined treatment of high pressure and temperature is frequently considered to be most appropriate (Farr, 1990; Patterson, Quinn, Simpson, and Gilmour, 1995). Vegetative cells, including yeasts and moulds, are pressure sensitive, i.e. they can be inactivated by pressures of ~300-600 MPa (Knorr, 1995; Patterson, Quinn, Simpson, and Gilmour, 1995). At high pressures, microbial death is considered to be due to permeabilization of the cell membrane. For instance, it was observed that in the case of Saccharomyces cerevisiae, at pressures of about 400 MPa, the structure and cytoplasmic organelles were grossly deformed and large quantities of intracellular material leaked out, while at 500 MPa, the nucleus could no longer be recognized, and the loss of intracellular material was almost complete (Farr, 1990). Changes that are induced in the cell morphology of the microorganisms are reversible at low pressures, but irreversible at higher pressures, where microbial death occurs due to permeabilization of the cell membrane. An increase in process temperature above ambient temperature, and to a lesser extent, a decrease below ambient temperature, increases the inactivation rates of microorganisms during high pressure processing. Temperatures in the range 45 to 50C appear to increase the rate of inactivation of pathogens and spoilage microorganisms. Preservation of acid foods (pH ≤ 4.6) is, therefore, the most obvious application of HPP as such. Moreover, pasteurization can be performed even under chilled conditions for heat sensitive products. Low temperature processing can help to retain nutritional quality and functionality of the raw materials treated, and could allow maintenance of low temperature during the post harvest treatment, processing, storage, transportation, and distribution periods of the life cycle of the food system (Knorr, 1995).
Bacterial spores are highly pressure resistant, since pressures exceeding 1200 MPa may be needed for their inactivation (Knorr, 1995). The initiation of germination, or inhibition of germinated bacterial spores and inactivation of pressure-resistant microorganisms, can be achieved in combination with moderate heating or other pretreatments such as ultrasound. Process temperatures in the range 90-121C in conjunction with pressures of 500-800 MPa have been used to inactivate spore-forming bacteria such as Clostridium botulinum. Thus, sterilization of low-acid foods (pH > 4.6) will most probably rely on a combination of high pressure and other forms of relatively mild treatments.
High-pressure application leads to the effective reduction of the activity of food quality related enzymes (oxidases), which ensures high quality and shelf stable products. Sometimes, food constituents confer pressure resistance on enzymes. Further, high pressure affects only non-covalent bonds (hydrogen, ionic, and hydrophobic bonds), causes unfolding of protein chains, and has little effect on chemical constituents associated with desirable food qualities such as flavor, color, or nutritional content. Thus, in contrast to thermal processing, the application of high pressure causes negligible impairment of nutritional value, taste, color, flavor, or vitamin content (Hayashi, 1990). Small molecules such as amino acids, vitamins, and flavor compounds remain unaffected by high pressure, while the structure of large molecules such as proteins, enzymes, polysaccharides, and nucleic acids may be altered (Balci and Wilbey, 1999).
High pressure reduces the rate of the browning (Maillard) reaction. The Maillard reaction consists of two steps: a condensation reaction of amino compounds with carbonyl compounds, and successive browning reactions including melanoidin formation and polymerization processes. The condensation reaction shows no acceleration by high pressure (5-50 MPa at 50C); rather, pressure suppresses the generation of stable free radicals derived from melanoidin, which are responsible for the browning reaction (Tamaoka, Itoh, and Hayashi, 1991). Gels induced by high pressure are found to be more glossy and transparent because of the rearrangement of water molecules surrounding amino acid residues in a denatured state (Okamoto, Kawamura, and Hayashi, 1990).
The capability and limitations of HPP have been extensively reviewed (Thakur and Nelson, 1998; Smelt, 1998; Cheftel, 1995; Knorr, 1995; Farr, 1990; Tiwari, Jayas, and Holley, 1999; Cheftel, Levy, and Dumay, 2000; Messens, Van Camp, and Huyghebaert, 1997; Otero and Sanz, 2000; Hugas, Garriga, and Monfort, 2002; Lakshmanan, Piggott, and Paterson, 2003; Balasubramaniam, 2003; Matser, Krebbers, Berg, and Bartels, 2004; Hogan, Kelly, and Sun, 2005; Mor-Mur and Yuste, 2005). Many of the early reviews primarily focused on the microbial efficacy of high-pressure processing. This review comprehensively covers the different types of products processed by high-pressure technology, alone or in combination with other processes. It also discusses the effect of high pressure on food constituents such as enzymes and proteins. The applications of this technology in the fruit and vegetable, dairy, and animal product processing industries are covered. The effects of combining high-pressure treatment with other processing methods such as gamma-irradiation, alternating current, ultrasound, carbon dioxide, and antimicrobial peptides have also been described. Special emphasis has been given to opportunities and challenges in high pressure processing of foods, which can potentially be explored and exploited.
EFFECT OF HIGH PRESSURE ON ENZYMES AND PROTEINS
Enzymes are a special class of proteins in which biological activity arises from active sites, brought together by the three-dimensional configuration of the molecule. Changes in the active site or protein denaturation can lead to loss of activity, or change the functionality of the enzyme (Tsou, 1986). In addition to conformational changes, enzyme activity can be influenced by pressure-induced decompartmentalization (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996). Pressure-induced damage of membranes facilitates enzyme-substrate contact. The resulting reaction can either be accelerated or retarded by pressure (Butz, Koller, Tauscher, and Wolf, 1994; Gomes and Ledward, 1996; Morild, 1981). Hendrickx, Ludikhuyze, Broeck, and Weemaes (1998) and Ludikhuyze, Van Loey, and Indrawati (2003) reviewed the combined effect of pressure and temperature on enzymes related to the quality of fruits and vegetables, covering kinetic information as well as process engineering aspects.
Pectin methylesterase (PME) is an enzyme which tends to lower the viscosity of fruit products and adversely affect their texture. Hence, its inactivation is a prerequisite for the preservation of such products. Commercially, fruit products containing PME (e.g. orange juice and tomato products) are heat pasteurized to inactivate PME and prolong shelf life. However, heating can deteriorate the sensory and nutritional quality of the products. Basak and Ramaswamy (1996) showed that the inactivation of PME in orange juice was dependent on pressure level, pressure-hold time, pH, and total soluble solids. Instantaneous pressure kill was dependent only on the pressure level, with a secondary inactivation effect dependent on holding time at each pressure level. Nienaber and Shellhammer (2001) studied the kinetics of PME inactivation in orange juice over a range of pressures (400-600 MPa) and temperatures (25-50C) for various process holding times. PME inactivation followed a first-order kinetic model, with a residual activity of pressure-resistant enzyme. Calculated D-values ranged from 4.6 to 117.5 min at 600 MPa/50C and 400 MPa/25C, respectively. Pressures in excess of 500 MPa resulted in sufficiently fast inactivation rates for economic viability of the process. Binh, Van Loey, Fachin, Verlent, Indrawati, and Hendrickx (2002a, 2002b) studied the kinetics of inactivation of strawberry PME. The combined effect of pressure and temperature on inactivation kinetics followed a fractional-conversion model. Purified strawberry PME was more stable toward high-pressure treatments than PME from oranges and bananas. Ly-Nguyen, Van Loey, Fachin, Verlent, and Hendrickx (2002) showed that the inactivation of the banana PME enzyme during heating at temperatures between 65 and 72.5C followed first-order kinetics, and the effect of pressure treatment at 600-700 MPa and 10C could be described using a fractional-conversion model.
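For reference, the first-order model and the D-values quoted in these studies are connected in the standard way (general kinetics, not specific to any one paper), writing A for residual enzyme activity:

```latex
A(t) = A_0\,e^{-kt},
\qquad D = \frac{\ln 10}{k},
\qquad \frac{A(t)}{A_0} = 10^{-t/D}
```

So with D = 4.6 min (600 MPa/50C), a 10 min hold would leave about 10^{-10/4.6}, i.e. roughly 0.7% residual PME activity.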
Stoforos, Crelier, Robert, and Taoukis (2002) demonstrated that under ambient pressure, tomato PME inactivation rates increased with temperature, and the highest rate was obtained at 75C. The inactivation rates dropped dramatically as soon as the processing pressure was raised; high inactivation rates were only obtained again at pressures higher than 700 MPa. Riahi and Ramaswamy (2003) studied high-pressure inactivation kinetics of PME isolated from a variety of sources and showed that PME from a microbial source was more resistant to pressure inactivation than that from orange peel. Almost a full decimal reduction in activity of commercial PME was achieved at 400 MPa within 20 min.
Verlent, Van Loey, Smout, Duvetter, Nguyen, and Hendrickx (2004) indicated that the optimal temperature for tomato pectinmethylesterase was shifted to higher values at elevated pressure compared to atmospheric pressure, creating the possibilities for rheology improvements by the application of high pressure.
Castro, Van Loey, Saraiva, Smout, and Hendrickx (2006) accurately described the inactivation of the labile fraction under mild-heat and high-pressure conditions by a fractional conversion model, while a biphasic model was used to estimate the inactivation rate constant of both fractions at more drastic conditions of temperature/pressure (10-64C, 0.1-800 MPa). At pressures lower than 300 MPa and temperatures higher than 54C, an antagonistic effect of pressure and temperature was observed.
Balogh, Smout, Binh, Van Loey, and Hendrickx (2004) observed the inactivation kinetics of carrot PME to follow first order kinetics over a range of pressure and temperature (650-800 MPa, 10-40C). Enzyme stability under heat and pressure was reported to be lower in carrot juice and purified PME preparations than in carrots.
The presence of pectinesterase (PE) reduces the quality of citrus juices by destabilization of clouds. Generally, the inactivation of the enzyme is accomplished by heat, resulting in a loss of fresh fruit flavor in the juice. High pressure processing can be used to bypass the use of extreme heat for the processing of fruit juices. Goodner, Braddock, and Parish (1998) showed that higher pressures (>600 MPa) caused instantaneous inactivation of the heat-labile form of the enzyme, but did not inactivate the heat-stable form of PE in orange and grapefruit juices. PE activity was totally lost in orange juice, whereas complete inactivation was not possible in grapefruit juice. Orange juice pressurized at 700 MPa for 1 min had no cloud loss for more than 50 days. Broeck, Ludikhuyze, Van Loey, and Hendrickx (2000) studied the combined pressure-temperature inactivation of the labile fraction of orange PE over a range of pressure (0.1 to 900 MPa) and temperature (15 to 65C). The pressure and temperature dependence of the inactivation rate constants of the labile fraction was quantified using the well-known Eyring and Arrhenius relations. The stable fraction was inactivated at temperatures higher than 75C. Acidification (pH 3.7) enhanced the thermal inactivation of the stable fraction, whereas the addition of Ca++ ions (1 M) suppressed inactivation. At elevated pressure (up to 900 MPa), an antagonistic effect of pressure and temperature on inactivation of the stable fraction was observed. Ly-Nguyen, Van Loey, Smout, Ozcan, Fachin, Verlent, Vu-Truong, Duvetter, and Hendrickx (2003) investigated the effect of combined heat and pressure treatments on the inactivation of purified carrot PE, which followed a fractional-conversion model. The thermally stable fraction of the enzyme could not be inactivated. At lower pressures (<300 MPa) and higher temperatures (>50C), an antagonistic effect of pressure and heat was observed.
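The Eyring and Arrhenius relations mentioned here describe, respectively, how the inactivation rate constant k depends on pressure and temperature (written in a common reference-state form; V_a is the activation volume discussed earlier in this review):

```latex
% Arrhenius: temperature dependence at constant pressure
k(T) = k_{\mathrm{ref}}\exp\!\left[\frac{E_a}{R}\left(\frac{1}{T_{\mathrm{ref}}}-\frac{1}{T}\right)\right]
% Eyring: pressure dependence at constant temperature
k(P) = k_{\mathrm{ref}}\exp\!\left[-\frac{V_a}{R\,T}\left(P-P_{\mathrm{ref}}\right)\right]
```

A negative activation volume V_a makes k increase with pressure, consistent with the Le Chatelier argument given earlier.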
High pressure induces conformational changes in polygalacturonase (PG), causing reduced substrate binding affinity and enzyme inactivation. Eun, Seok, and Wan (1999) studied the effect of high-pressure treatment on PG from Chinese cabbage to prevent the softening and spoilage of plant-based foods such as kimchies without compromising quality. PG was inactivated by the application of pressures higher than 200 MPa for 1 min. Fachin, Van Loey, Indrawati, Ludikhuyze, and Hendrickx (2002) investigated the stability of tomato PG at different temperatures and pressures. The combined pressure-temperature inactivation (300-600 MPa/5-50C) of tomato PG was described by a fractional conversion model, which points to 1st-order inactivation kinetics of a pressure-sensitive enzyme fraction and to the occurrence of a pressure-stable PG fraction. Fachin, Smout, Verlent, Binh, Van Loey, and Hendrickx (2004) indicated that under combined pressure-temperature treatment (5-55C/100-600 MPa), the inactivation of the heat-labile portion of purified tomato PG followed first order kinetics. The heat-stable fraction of the enzyme showed pressure stability very similar to that of the heat-labile portion.
Peeters, Fachin, Smout, Van Loey, and Hendrickx (2004) demonstrated that the effect of high pressure was identical on the heat-stable and heat-labile fractions of tomato PG. The isoenzyme of PG was detected in thermally treated (140C for 5 min) tomato pieces and tomato juice, whereas no PG was found in pressure treated tomato juice or pieces.
Verlent, Van Loey, Smout, Duvetter, and Hendrickx (2004) investigated the effect of high pressure (0.1 and 500 MPa) and temperature (25-80C) on purified tomato PG. At atmospheric pressure, the optimum temperature for the enzyme was found to be 55-60C, and it decreased with an increase in pressure. The enzyme activity was reported to decrease with an increase in pressure at a constant temperature.
Shook, Shellhammer, and Schwartz (2001) studied the ability of high pressure to inactivate lipoxygenase, PE, and PG in diced tomatoes. Processing conditions used were 400, 600, and 800 MPa for 1, 3, and 5 min at 25 and 45C. The magnitude of the applied pressure had a significant effect in inactivating lipoxygenase and PG, with complete loss of activity occurring at 800 MPa. PE was very resistant to the pressure treatment.
Polyphenoloxidase and Peroxidase
Polyphenoloxidase (PPO) and peroxidase (POD), the enzymes responsible for color and flavor loss, can be selectively inactivated by a combined treatment of pressure and temperature. Gomes and Ledward (1996) studied the effects of pressure treatment (100-800 MPa for 1-20 min) on commercial PPO enzyme available from mushrooms, potatoes, and apples. Castellari, Matricardi, Arfelli, Rovere, and Amati (1997) demonstrated that there was limited inactivation of grape PPO using pressures between 300 and 600 MPa. At 900 MPa, a low level of PPO activity was apparent. In order to reach complete inactivation, it may be necessary to use high-pressure processing treatments in conjunction with a mild thermal treatment (40-50C). Weemaes, Ludikhuyze, Broeck, and Hendrickx (1998) studied the pressure stabilities of PPO from apple, avocados, grapes, pears, and plums at pH 6-7. These PPO differed in pressure stability. Inactivation of PPO from apple, grape, avocado, and pear at room temperature (25C) became noticeable at approximately 600, 700, 800, and 900 MPa, respectively, and followed first-order kinetics. Plum PPO was not inactivated at room temperature by pressures up to 900 MPa. Rastogi, Eshtiaghi, and Knorr (1999) studied the inactivation effects of high hydrostatic pressure treatment (100-600 MPa) combined with heat treatment (0-60C) on POD and PPO enzymes, in order to develop high pressure-processed red grape juice having a stable shelf-life. The studies showed that the lowest POD (55.75%) and PPO (41.86%) activities were found at 60C, with pressures of 600 and 100 MPa, respectively. MacDonald and Schaschke (2000) showed that for PPO, temperature and pressure individually appeared to have similar effects, whereas the holding time was not significant. On the other hand, in the case of POD, temperature as well as the interaction between temperature and holding time had the greatest effect on activity.
Namkyu, Seunghwan, and Kyung (2002) showed that mushroom PPO was highly pressure stable. Exposure to 600 MPa for 10 min reduced PPO activity by 7%; further exposure had no denaturing effect. Compression for 10 and 20 min at up to 800 MPa reduced activity by 28 and 43%, respectively.
Rapeanu, Van Loey, Smout, and Hendrickx (2005) indicated that the thermal and/or high-pressure inactivation of grape PPO followed first-order kinetics. A third-degree polynomial described the temperature/pressure dependence of the inactivation rate constants. Pressure and temperature were reported to act synergistically, except in the high-temperature (≥45C)/low-pressure (≤300 MPa) region, where an antagonistic effect was observed.
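As a numerical illustration of the first-order inactivation kinetics reported above, the residual enzyme activity after a pressure hold can be sketched as follows; the rate constant used here is purely illustrative and not a value from the cited studies:

```python
import math

def residual_activity(k_per_min: float, t_min: float) -> float:
    """First-order enzyme inactivation: A/A0 = exp(-k * t)."""
    return math.exp(-k_per_min * t_min)

# Illustrative rate constant (min^-1); not a value from Rapeanu et al. (2005).
k = 0.15
for t in (0, 5, 10, 20):
    print(f"t = {t:2d} min -> A/A0 = {residual_activity(k, t):.3f}")
```

Fitting ln(A/A0) against holding time gives k at each pressure-temperature combination; a third-degree polynomial in pressure and temperature can then be fitted to the resulting rate constants, as the authors describe.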
Gomes, Sumner, and Ledward (1997) showed that the application of increasing pressures led to a gradual reduction in papain activity. A decrease in activity of 39% was observed when the enzyme solution was initially activated with phosphate buffer (pH 6.8) and subjected to 800 MPa at ambient temperature for 10 min, while only 13% of the original activity remained when the enzyme solution was treated at 800 MPa at 60C for 10 min. In Tris buffer at pH 6.8, after treatment at 800 MPa and 20C, papain activity loss was approximately 24%. Inactivation was attributed to a pressure-induced change at the active site, which caused loss of activity without major conformational changes; the loss of activity was due to oxidation of the thiolate ion present at the active site.
Weemaes, Cordt, Goossens, Ludikhuyze, Hendrickx, Heremans, and Tobback (1996) studied the effects of pressure and temperature on activity of 3 different alpha-amylases from Bacillus subtilis, Bacillus amyloliquefaciens, and Bacillus licheniformis. The changes in conformation of Bacillus licheniformis, Bacillus subtilis, and Bacillus amyloliquefaciens amylases occurred at pressures of 110, 75, and 65 MPa, respectively. Bacillus licheniformis amylase was more stable than amylases from Bacillus subtilis and Bacillus amyloliquefaciens to the combined heat/pressure treatment.
Riahi and Ramaswamy (2004) demonstrated that pressure inactivation of amylase in apple juice was significantly (P < 0.01) influenced by pH, pressure, holding time, and temperature. The inactivation was described using a bi-phasic model. The application of high pressure was shown to completely inactivate amylase. The importance of the pressure-pulse and pressure-hold approaches for inactivation of amylase was also demonstrated.
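The bi-phasic model mentioned above treats the enzyme as two populations, a pressure-labile fraction and a pressure-stable fraction, each decaying with first-order kinetics. A minimal sketch, with illustrative parameter values only:

```python
import math

def biphasic_residual(t_min: float, f_labile: float,
                      k_labile: float, k_stable: float) -> float:
    """Bi-phasic inactivation:
    A/A0 = f_l * exp(-k_l * t) + (1 - f_l) * exp(-k_s * t),
    where f_l is the labile fraction and k_l >> k_s."""
    return (f_labile * math.exp(-k_labile * t_min)
            + (1.0 - f_labile) * math.exp(-k_stable * t_min))

# Illustrative parameters: an 80% labile fraction decaying fast,
# the remainder decaying slowly.
for t in (0, 2, 10, 30):
    print(f"t = {t:2d} min -> A/A0 = {biphasic_residual(t, 0.8, 1.0, 0.02):.3f}")
```

At long holding times the residual activity is governed almost entirely by the stable fraction, which is why such curves flatten instead of staying log-linear.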
High pressure denatures proteins depending on the protein type, processing conditions, and the applied pressure. During denaturation, the proteins may dissolve or precipitate on the application of high pressure. These changes are generally reversible in the pressure range 100-300 MPa and irreversible at pressures higher than 300 MPa. Denaturation may be due to the destruction of hydrophobic and ion-pair bonds, and the unfolding of molecules. At higher pressures, oligomeric proteins tend to dissociate into subunits, becoming vulnerable to proteolysis. Monomeric proteins do not show any changes in proteolysis with increase in pressure (Thakur and Nelson, 1998).
High-pressure effects on proteins are related to the rupture of non-covalent interactions within protein molecules, and to the subsequent reformation of intra- and intermolecular bonds within or between molecules. Different types of interactions contribute to the secondary, tertiary, and quaternary structure of proteins. The quaternary structure is mainly held by hydrophobic interactions that are very sensitive to pressure. Significant changes in the tertiary structure are observed beyond 200 MPa. However, a reversible unfolding of small proteins such as ribonuclease A occurs at higher pressures (400 to 800 MPa), showing that the volume and compressibility changes during denaturation are not completely dominated by the hydrophobic effect. Denaturation is a complex process involving intermediate forms leading to multiple denatured products. Secondary structure changes take place at very high pressures, above 700 MPa, leading to irreversible denaturation (Balny and Masson, 1993).
Figure 1 General scheme for pressure-temperature phase diagram of proteins, (from Messens, Van Camp, and Huyghebaert, 1997).
When the pressure increases to about 100 MPa, the denaturation temperature of the protein increases, whereas at higher pressures, the temperature of denaturation usually decreases. This results in the elliptical phase diagram of native/denatured protein shown in Fig. 1. A practical consequence is that, under sufficiently elevated pressures, proteins can denature at room temperature rather than only at higher temperatures. The phase diagram also specifies the pressure-temperature range in which the protein maintains its native structure. Zone III specifies that at high temperatures, a rise in denaturation temperature is found with increasing pressure. Zone II indicates that below the maximum transition temperature, protein denaturation occurs at lower temperatures under higher pressures. Zone I shows that below the temperature corresponding to the maximum transition pressure, protein denaturation occurs at lower pressures using lower temperatures (Messens, Van Camp, and Huyghebaert, 1997).
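The elliptical boundary in Fig. 1 is commonly rationalized by a second-order Taylor expansion of the Gibbs free-energy difference between the denatured and native states (the Hawley model); the equation below is a standard result from the high-pressure protein literature, not taken from Messens, Van Camp, and Huyghebaert (1997):

```latex
\Delta G(P,T) = \Delta G_0
  + \Delta V_0\,(P-P_0) - \Delta S_0\,(T-T_0)
  + \tfrac{\Delta\beta}{2}\,(P-P_0)^2
  + \Delta\alpha\,(P-P_0)(T-T_0)
  - \tfrac{\Delta C_p}{2T_0}\,(T-T_0)^2
```

Setting ΔG(P,T) = 0 traces the native/denatured boundary; for the signs of Δβ, ΔCp, and Δα typical of proteins, this contour is an ellipse, reproducing the three zones described above.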
The application of high pressure has been shown to destabilize casein micelles in reconstituted skim milk, and the size distribution of spherical casein micelles decreases from 200 to 120 nm; maximum changes have been reported to occur between 150-400 MPa at 20C. The pressure treatment results in reduced turbidity and increased lightness, which leads to the formation of a virtually transparent skim milk (Shibauchi, Yamamoto, and Sagara, 1992; Derobry, Richard, and Hardy, 1994). The gels produced from high-pressure treated skim milk showed improved rigidity and gel breaking strength (Johnston, Austin, and Murphy, 1992). Garcia, Olano, Ramos, and Lopez (2000) showed that pressure treatment at 25C considerably reduced the micelle size, while pressurization at higher temperatures progressively increased the micelle dimensions. Anema, Lowe, and Stockmann (2005) indicated that a small decrease in the size of casein micelles was observed at 100 MPa, with slightly greater effects at higher temperatures or longer pressure treatments. At pressures >400 MPa, the casein micelles disintegrated. The effect was more rapid at higher temperatures, although the final size was similar in all samples regardless of the pressure or temperature. At 200 MPa and 10C, the casein micelle size decreased slightly on heating, whereas, at higher temperatures, the size increased as a result of aggregation. Huppertz, Fox, and Kelly (2004a) showed that the size of casein micelles increased by 30% upon high-pressure treatment of milk at 250 MPa, and micelle size dropped by 50% at 400 or 600 MPa.
Huppertz, Fox, and Kelly (2004b) demonstrated that the high-pressure treatment of milk at 100-600 MPa resulted in considerable solubilization of alpha-s1- and beta-casein, which may be due to the solubilization of colloidal calcium phosphate and disruption of hydrophobic interactions. On storage of pressure-treated milk at 5C, dissociation of casein was largely irreversible, but at 20C, considerable re-association of casein was observed. The hydration of the casein micelles increased on pressure treatment (100-600 MPa) due to induced interactions between caseins and whey proteins. Pressure treatment increased the levels of alpha-s1- and beta-casein in the soluble phase of milk and produced casein micelles with properties different from those in untreated milk. Huppertz, Fox, and Kelly (2004c) demonstrated that the casein micelle size was not influenced by pressures less than 200 MPa, but a pressure of 250 MPa increased the micelle size by 25%, while pressures of 300 MPa or greater irreversibly reduced the size to 50% of that in untreated milk. Denaturation of alpha-lactalbumin did not occur at pressures less than or equal to 400 MPa, whereas beta-lactoglobulin was denatured at pressures greater than 100 MPa.
Galazka, Ledward, Sumner, and Dickinson (1997) reported loss of surface hydrophobicity due to the application of 300 MPa in dilute solution. Pressurizing beta-lactoglobulin at 450 MPa for 15 minutes resulted in reduced solubility in water. High-pressure treatment induced extensive protein unfolding and aggregation when BSA was pressurized at 400 MPa. Beta-lactoglobulin appears to be more sensitive to pressure than alpha-lactalbumin. Olsen, Ipsen, Otte, and Skibsted (1999) monitored the state of aggregation and thermal gelation properties of pressure-treated beta-lactoglobulin immediately after depressurization and after storage for 24 h at 5C. A pressure of 150 MPa applied for 30 min, or pressures higher than 300 MPa applied for 0 or 30 min, led to the formation of soluble aggregates. When continued for 30 min, a pressure of 450 MPa caused gelation of the 5% beta-lactoglobulin solution. Iametti, Transidico, Bonomi, Vecchio, Pittia, Rovere, and Dall'Aglio (1997) studied irreversible modifications in the tertiary structure, surface hydrophobicity, and association state of beta-lactoglobulin when solutions of the protein at neutral pH and at different concentrations were exposed to pressure. Only minor irreversible structural modifications were evident, even for treatments as intense as 15 min at 900 MPa. The occurrence of irreversible modifications was time-dependent at 600 MPa but was complete within 2 min at 900 MPa. The irreversibly modified protein was soluble, but some covalent aggregates were formed. Subirade, Loupil, Allain, and Paquin (1998) showed the effect of dynamic high pressure on the secondary structure of beta-lactoglobulin. The thermal and pH sensitivity of pressure-treated beta-lactoglobulin was different, suggesting that the two forms were stabilized by different electrostatic interactions.
Walker, Farkas, Anderson, and Goddik (2004) used high-pressure processing (510 MPa for 10 min at 8 or 24C) to induce unfolding of beta-lactoglobulin and characterized the protein structure and surface-active properties. The secondary structure of the protein processed at 8C appeared to be unchanged, whereas at 24C the alpha-helix structure was lost. Tertiary structures changed due to processing at either temperature. Model solutions containing the pressure-treated beta-lactoglobulin showed a significant decrease in surface tension. Izquierdo, Alli, Gomez, Ramaswamy, and Yaylayan (2005) demonstrated that under high-pressure treatments (100-300 MPa), beta-lactoglobulin AB was completely hydrolyzed by pronase and alpha-chymotrypsin. Hinrichs and Rademacher (2005) showed that the denaturation of beta-lactoglobulin followed second-order kinetics, while for alpha-lactalbumin the reaction order was 2.5. Alpha-lactalbumin was more resistant to denaturation than beta-lactoglobulin. The activation volume for denaturation of beta-lactoglobulin was reported to decrease with increasing temperature, and the activation energy increased with pressure up to 200 MPa, beyond which it decreased. This demonstrated the unfolding of the protein molecules.
Drake, Harrison, Asplund, Barbosa-Canovas, and Swanson (1997) demonstrated that the percentage moisture and wet-weight yield of cheese from pressure-treated milk were higher than those of pasteurized or raw milk cheese. The microbial quality was comparable, and some textural defects were reported due to the excess moisture content. Arias, Lopez, and Olano (2000) showed that high-pressure treatment at 200 MPa significantly reduced rennet coagulation times over control samples. Pressurization at 400 MPa led to coagulation times similar to those of controls, except for milk treated at pH 7.0, with or without readjustment of pH to 6.7, which presented significantly longer coagulation times than its non-pressure-treated counterparts.
Hinrichs and Rademacher (2004) demonstrated that the isobaric (200-800 MPa) and isothermal (-2 to 70C) denaturation of beta-lactoglobulin and alpha-lactalbumin of whey protein followed third- and second-order kinetics, respectively. Isothermal pressure denaturation of beta-lactoglobulin A and B did not differ significantly, and an increase in temperature resulted in an increase in the denaturation rate. At pressures higher than 200 MPa, the denaturation rate was limited by the aggregation rate, while the pressure resulted in the unfolding of molecules. The kinetic parameters of denaturation were estimated using a single-step non-linear regression method, which allowed a global fit of the entire data set. Huppertz, Fox, and Kelly (2004d) examined the high-pressure-induced denaturation of alpha-lactalbumin and beta-lactoglobulin in dairy systems. The higher level of pressure-induced denaturation of both proteins in milk as compared to whey was due to the absence of casein micelles and colloidal calcium phosphate in the whey.
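The non-first-order kinetics reported for whey protein denaturation integrate to a closed form: for dC/dt = -k C^n with n ≠ 1, the residual native fraction is C(t) = [C0^(1-n) + (n-1) k t]^(1/(1-n)). A small sketch (the reaction orders follow the text above; the rate constants are illustrative assumptions):

```python
import math

def nth_order_residual(c0: float, k: float, t: float, n: float) -> float:
    """Residual native protein for dC/dt = -k * C**n.
    For n != 1: C(t) = (c0**(1-n) + (n-1)*k*t)**(1/(1-n))."""
    if n == 1:
        return c0 * math.exp(-k * t)
    return (c0 ** (1.0 - n) + (n - 1.0) * k * t) ** (1.0 / (1.0 - n))

# Reaction orders as reported (beta-lactoglobulin n = 3,
# alpha-lactalbumin n = 2); the k values are illustrative only.
beta_lg = nth_order_residual(1.0, 0.05, 10.0, 3.0)
alpha_la = nth_order_residual(1.0, 0.05, 10.0, 2.0)
```

Unlike first-order decay, these curves flatten at long times, which is characteristic of aggregation-limited denaturation.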
The conformation of BSA was reported to remain fairly stable at 400 MPa due to a high number of disulfide bonds, which are known to stabilize its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Kieffer and Wieser (2004) indicated that the extension resistance and extensibility of wet gluten were markedly influenced by high pressure (up to 800 MPa), while the temperature and the duration of pressure treatment (30-80C for 2-20 min) had a relatively lesser effect. The application of high pressure resulted in a marked decrease in protein extractability due to the restructuring of disulfide bonds under high pressure, leading to the incorporation of alpha- and gamma-gliadins in the glutenin aggregate. A change in secondary structure following high-pressure treatment was also reported.
The pressure treatment of myosin led to head-to-head interaction to form oligomers (clumps), which became more compact and larger in size during storage at constant pressure. Even after pressure treatment at 210 MPa for 5 minutes, monomeric myosin molecules increased, and no gelation was observed for pressure treatment up to 210 MPa for 30 minutes. Pressure treatment also did not affect the original helical structure of the tail in the myosin monomers. Angsupanich, Edde, and Ledward (1999) showed that high-pressure-induced denaturation of myosin led to the formation of structures that contained hydrogen bonds and were additionally stabilized by disulphide bonds.
Application of 750 MPa for 20 minutes resulted in dimerization of metmyoglobin in the pH range 6-10, with the maximum effect occurring away from the isoelectric pH (6.9). Under acidic pH conditions, no dimers were formed (Defaye and Ledward, 1995). Zipp and Kauzmann (1973) showed the formation of a precipitate when metmyoglobin was pressurized (750 MPa for 20 minutes) near the isoelectric point; the precipitate redissolved slowly during storage. Pressure treatment had no effect on lipid oxidation in minced meat packed in air at pressures less than 300 MPa, while oxidation increased proportionally at higher pressures. On exposure to higher pressures, minced meat in contact with air oxidized rapidly. Pressures >300-400 MPa caused marked denaturation of both myofibrillar and sarcoplasmic proteins in washed pork muscle and pork mince (Ananth, Murano, and Dickson, 1995). Chapleau and Lamballerie (2003) showed that high-pressure treatment induced a threefold increase in the surface hydrophobicity of myofibrillar proteins between 0 and 450 MPa. Chapleau, Mangavel, Compoint, and Lamballerie (2004) reported that high pressure modified the secondary structure of myofibrillar proteins extracted from cattle carcasses. Irreversible changes and aggregation were reported at pressures higher than 300 MPa, which can potentially affect the functional properties of meat products. Lamballerie, Perron, Jung, and Cheret (2003) indicated that high-pressure treatment increases cathepsin D activity, and that pressurized myofibrils are more susceptible to cathepsin D action than non-pressurized myofibrils. The highest cathepsin D activity was observed at 300 MPa. Carlez, Veciana, and Cheftel (1995) demonstrated that L color values increased significantly in meat treated at 200-350 MPa, the meat becoming pink, and a-values decreased in meat treated at 400-500 MPa to give a grey-brown color.
The total extractable myoglobin decreased in meat treated at 200-500 MPa, while the metmyoglobin content of meat increased and the oxymyoglobin content decreased at 400-500 MPa. Meat discoloration from pressure processing resulted in a whitening effect at 200-300 MPa due to globin denaturation and/or haem displacement/release, or oxidation of ferrous myoglobin to ferric myoglobin at pressures higher than 400 MPa.
The conformation of ovalbumin, the main protein component of egg white, remains fairly stable when pressurized at 400 MPa, possibly due to the four disulfide bonds and non-covalent interactions stabilizing its three-dimensional structure (Hayakawa, Kajihara, Morikawa, Oda, and Fujio, 1992). Hayashi, Kawamura, Nakasa, and Okinada (1989) reported irreversible denaturation of egg albumin at 500-900 MPa, with a concomitant increase in susceptibility to subtilisin. Zhang, Li, and Tatsumi (2005) demonstrated that pressure treatment (200-500 MPa) resulted in denaturation of ovalbumin. The surface hydrophobicity of ovalbumin was found to increase with increasing pressure, and the presence of polysaccharide protected the protein against denaturation. Iametti, Donnizzelli, Pittia, Rovere, Squarcina, and Bonomi (1999) showed that the addition of NaCl or sucrose to egg albumin prior to high-pressure treatment (up to 10 min at 800 MPa) prevented insolubilization or gel formation after pressure treatment. As a consequence of protein unfolding, the treated albumin had increased viscosity but retained its foaming and heat-gelling properties. Farr (1990) reported the modification of the functionality of egg proteins. Egg yolk formed a gel when subjected to a pressure of 400 MPa for 30 minutes at 25C, kept its original color, and was soft and adhesive. The hardness of the pressure-treated gel increased and its adhesiveness decreased with an increase in pressure. Plancken, Van Loey, and Hendrickx (2005) showed that the application of high pressure (400-700 MPa) to egg white solution resulted in an increase in turbidity, surface hydrophobicity, exposed sulfhydryl content, and susceptibility to enzymatic hydrolysis, while it resulted in a decrease in protein solubility, total sulfhydryl content, denaturation enthalpy, and trypsin inhibitory activity.
The pressure-induced changes in these properties were shown to be dependent on the pressure-temperature combination and the pH of the solution. Speroni, Puppo, Chapleau, Lamballerie, Castellani, Anon, and Anton (2005) indicated that the application of high pressure (200-600 MPa) at 20C to low-density lipoproteins did not change their solubility even if the pH was changed, whereas aggregation and protein denaturation were drastically enhanced at pH 8. Further, the application of high pressure under alkaline pH conditions resulted in decreased droplet flocculation of low-density lipoprotein dispersions.
The minimum pressure required for inducing gelation of soya proteins was reported to be 300 MPa for 10-30 minutes, and the gels formed were softer, with a lower elastic modulus, in comparison with heat-treated gels (Okamoto, Kawamura, and Hayashi, 1990). The treatment of soya milk at 500 MPa for 30 min changed it from a liquid state to a solid state, whereas at lower pressures, and at 500 MPa for 10 minutes, the milk remained in a liquid state but showed improved emulsifying activity and stability (Kajiyama, Isobe, Uemura, and Noguchi, 1995). The hardness of tofu gels produced by high-pressure treatment at 300 MPa for 10 minutes was comparable to that of heat-induced gels. Puppo, Chapleau, Speroni, Lamballerie, Michel, Anon, and Anton (2004) demonstrated that the application of high pressure (200-600 MPa) to soya protein isolate at pH 8.0 resulted in an increase in protein hydrophobicity and aggregation, a reduction of free sulfhydryl content, and a partial unfolding of the 7S and 11S fractions. A change in the secondary structure, leading to a more disordered structure, was also reported. At pH 3.0, the protein was partially denatured and insoluble aggregates were formed; the major molecular unfolding resulted in decreased thermal stability and increased protein solubility and hydrophobicity. Puppo, Speroni, Chapleau, Lamballerie, Anon, and Anton (2005) studied the effect of high pressure (200, 400, and 600 MPa for 10 min at 10C) on the emulsifying properties of soybean protein isolates at pH 3 and 8 (e.g. oil droplet size, flocculation, interfacial protein concentration, and composition). The application of pressures higher than 200 MPa at pH 8 resulted in a smaller droplet size and an increase in the levels of depletion flocculation. However, a similar effect was not observed at pH 3. Due to the application of high pressure, bridging flocculation decreased and the percentage of adsorbed proteins increased, irrespective of the pH conditions.
Moreover, the ability of the protein to be adsorbed at the oil-water interface increased. Zhang, Li, Tatsumi, and Isobe (2005) showed that the application of high-pressure treatment resulted in the formation of more hydrophobic regions in soy protein, which dissociated into subunits that in some cases formed insoluble aggregates. High-pressure denaturation of beta-conglycinin (7S) and glycinin (11S) occurred at 300 and 400 MPa, respectively. The gels formed had desirable strength and a cross-linked network microstructure.
Soybean whey is a by-product of tofu manufacture. It is a good source of peptides, proteins, oligosaccharides, and isoflavones, and can be used in special foods for elderly persons, athletes, etc. Prestamo and Penas (2004) studied the antioxidative activity of soybean whey proteins and their pepsin and chymotrypsin hydrolysates. The chymotrypsin hydrolysate showed a higher antioxidative activity than the non-hydrolyzed protein, but the pepsin hydrolysate showed the opposite trend. High-pressure processing at 100 MPa increased the antioxidative activity of soy whey protein, but decreased the antioxidative activity of the hydrolysates. High-pressure processing increased the pH of the protein hydrolysates. Penas, Prestamo, and Gomez (2004) demonstrated that the application of high pressure (100 and 200 MPa, 15 min, 37C) facilitated the hydrolysis of soya whey protein by pepsin, trypsin, and chymotrypsin. The highest level of hydrolysis occurred at a treatment pressure of 100 MPa. After hydrolysis, 5 peptides under 14 kDa were reported with trypsin and chymotrypsin, and 11 peptides with pepsin.
COMBINATION OF HIGH-PRESSURE TREATMENT WITH OTHER NON-THERMAL PROCESSING METHODS
Many researchers have combined the use of high pressure with other non-thermal operations in order to explore the possibility of synergy between processes. Such attempts are reviewed in this section.
Crawford, Murano, Olson, and Shenoy (1996) studied the combined effect of high pressure and gamma-irradiation for inactivating Clostridium sporogenes spores in chicken breast. Application of high pressure reduced the radiation dose required to produce chicken meat with extended shelf life. The application of high pressure (600 MPa for 20 min at 80C) reduced the irradiation dose required for one log reduction of Clostridium sporogenes from 4.2 kGy to 2.0 kGy. Mainville, Montpetit, Durand, and Farnworth (2001) studied the combined effect of irradiation and high pressure on the microflora and microorganisms of kefir. The irradiation treatment of kefir at 5 kGy and high-pressure treatment (400 MPa for 5 or 30 min) deactivated the bacteria and yeast in kefir, while leaving the proteins and lipids unchanged.
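The dose figures above correspond to a D10-value (the dose giving one log reduction) falling from 4.2 to 2.0 kGy after pressurization. Assuming simple log-linear survival, the log reductions achieved by a given dose can be compared as follows:

```python
def log_reduction(dose_kgy: float, d10_kgy: float) -> float:
    """Log-linear survival: log10(N0/N) = dose / D10."""
    return dose_kgy / d10_kgy

dose = 4.2  # kGy
print(f"Without pressure (D10 = 4.2 kGy): {log_reduction(dose, 4.2):.1f} log")
print(f"After 600 MPa    (D10 = 2.0 kGy): {log_reduction(dose, 2.0):.1f} log")
```

Equivalently, the same one-log target needs less than half the dose after the pressure treatment, which is the practical benefit the authors report.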
The exposure of microbial cells and spores to an alternating current (50 Hz) resulted in the release of intracellular materials, causing loss or denaturation of cellular components responsible for the normal functioning of the cell. The lethal damage to the microorganisms was enhanced when the organisms were exposed to an alternating current before and after the pressure treatment. High-pressure treatment at 300 MPa for 10 min for Escherichia coli cells, and 400 MPa for 30 min for Bacillus subtilis spores, after the alternating current treatment, resulted in reduced surviving fractions of both organisms. The combined effect was also shown to reduce the tolerance of the microorganisms to other challenges (Shimada and Shimahara, 1985, 1987; Shimada, 1992).
The pretreatment with ultrasonic waves (100 W/cm² for 25 min at 25C) followed by high pressure (400 MPa for 25 min at 15C) was shown to result in complete inactivation of Rhodotorula rubra. Neither ultrasonic nor high-pressure treatment alone was found to be effective (Knorr, 1995).
Carbon Dioxide and Argon
Heinz and Knorr (1995) reported a 3-log reduction in cultures pretreated with supercritical CO2. The effect of the pretreatment on the germination of Bacillus subtilis endospores was monitored. The combination of high pressure and mild heat treatment was the most effective in reducing germination (95% reduction), but no spore inactivation was observed.
Park, Lee, and Park (2002) studied the combination of high-pressure carbon dioxide and high pressure as a nonthermal processing technique to enhance the safety and shelf life of carrot juice. The combined treatment of carbon dioxide (4.90 MPa) and high pressure (300 MPa) resulted in complete destruction of aerobes. Increasing the pressure to 600 MPa in the presence of carbon dioxide resulted in reduced activities of polyphenoloxidase (11.3%), lipoxygenase (8.8%), and pectin methylesterase (35.1%). Corwin and Shellhammer (2002) studied the combined effect of high-pressure treatment and CO2 on the inactivation of pectinmethylesterase, polyphenoloxidase, Lactobacillus plantarum, and Escherichia coli. An interaction was found between CO2 and pressure at 25 and 50C for pectinmethylesterase and polyphenoloxidase, respectively. The activity of polyphenoloxidase was decreased by CO2 at all pressure treatments. The interaction between CO2 and pressure was significant for Lactobacillus plantarum, with a significant decrease in survivors due to the addition of CO2 at all pressures studied. No significant effect of CO2 addition on E. coli survivors was seen. Truong, Boff, Min, and Shellhammer (2002) demonstrated that the addition of CO2 (0.18 MPa) during high-pressure processing (600 MPa, 25C) of fresh orange juice increases the rate of PME inactivation in Valencia orange juice. The addition of CO2 reduced the treatment time required for an equivalent reduction in PME activity from 346 s to 111 s, but the overall degree of PME inactivation remained unaltered.
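Treating PME inactivation as first order, equal residual activity after 346 s without CO2 and 111 s with CO2 implies that the rate constants scale inversely with the treatment times. A quick check under that assumption:

```python
# Equivalent-treatment times reported by Truong et al. (2002)
t_without_co2 = 346.0  # s
t_with_co2 = 111.0     # s

# exp(-k1 * t1) = exp(-k2 * t2)  =>  k2 / k1 = t1 / t2
rate_enhancement = t_without_co2 / t_with_co2
print(f"CO2 addition accelerates PME inactivation ~{rate_enhancement:.1f}-fold")
```

The roughly three-fold speed-up applies only to the rate; as the text notes, the final extent of PME inactivation was unchanged.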
Fujii, Ohtani, Watanabe, Ohgoshi, Fujii, and Honma (2002) studied the high-pressure inactivation of Bacillus cereus spores in water containing argon. At the pressure of 600 MPa, the addition of argon reportedly accelerated the inactivation of spores at 20C, but had no effect on the inactivation at 40C.
The complex physicochemical environment of milk exerted a strong protective effect on Escherichia coli against high hydrostatic pressure inactivation, reducing inactivation from 7 logs at 400 MPa to only 3 logs at 700 MPa in 15 min at 20C. A substantial improvement in inactivation efficiency at ambient temperature was achieved by the application of consecutive, short pressure treatments interrupted by brief decompressions. The combined effect of high pressure (500 MPa) and natural antimicrobial peptides (lysozyme, 400 µg/ml and nisin, 400 µg/ml) resulted in increased lethality towards Escherichia coli in milk (Garcia, Masschalck, and Michiels, 1999).
OPPORTUNITIES FOR HIGH PRESSURE ASSISTED PROCESSING
The inclusion of high-pressure treatment as a processing step within certain manufacturing flow sheets can lead to novel products as well as new process development opportunities. For instance, high pressure can precede a number of process operations such as blanching, dehydration, rehydration, frying, and solid-liquid extraction. Alternatively, processes such as gelation, freezing, and thawing, can be carried out under high pressure. This section reports on the use of high pressures in the context of selected processing operations.
Eshtiaghi and Knorr (1993) employed high pressure around ambient temperature to develop a blanching process similar to hot water or steam blanching, but without thermal degradation; this also minimized problems associated with water disposal. The application of pressure (400 MPa, 15 min, 20C) to potato samples not only caused blanching but also resulted in a four-log-cycle reduction in microbial count whilst retaining 85% of the ascorbic acid. Complete inactivation of polyphenoloxidase was achieved under the above conditions when 0.5% citric acid solution was used as the blanching medium. The addition of 1% CaCl2 solution to the medium also improved the texture and the density. The leaching of potassium from the high-pressure treated sample was comparable with a 3 min hot water blanching treatment (Eshtiaghi and Knorr, 1993). Thus, high pressure can be used as a non-thermal blanching method.
Dehydration and Osmotic Dehydration
The application of high hydrostatic pressure affects cell wall structure, leaving the cell more permeable, which leads to significant changes in the tissue architecture (Farr, 1990; Dornenburg and Knorr, 1994; Rastogi, Subramanian, and Raghavarao, 1994; Rastogi and Niranjan, 1998; Rastogi, Raghavarao, and Niranjan, 2005). Eshtiaghi, Stute, and Knorr (1994) reported that the application of pressure (600 MPa, 15 min at 70C) resulted in no significant increase in the drying rate during fluidized bed drying of green beans and carrot. However, the drying rate significantly increased in the case of potato. This may be due to the relatively limited permeabilization of carrot and bean cells as compared to potato. The effects of chemical pre-treatment (NaOH and HCl treatment) on the rates of dehydration of paprika were compared with products pre-treated by applying high pressure or high intensity electric field pulses (Fig. 2). High pressure (400 MPa for 10 min at 25C) and high intensity electric field pulses (2.4 kV/cm, pulse width 300 µs, 10 pulses, pulse frequency 1 Hz) were found to result in drying rates comparable with those of chemical pre-treatments. The latter pre-treatments, however, eliminated the use of chemicals (Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 2 (a) Effects of various pre-treatments such as hot water blanching, high pressure and high intensity electric field pulse treatment on dehydration characteristics of red paprika (b) comparison of drying time (from Ade-Omowaye, Rastogi, Angersbach, and Knorr, 2001).
Figure 3 (a) Variation of moisture and (b) solid content (based on initial dry matter content) with time during osmotic dehydration (from Rastogi and Niranjan, 1998).
Generally, osmotic dehydration is a slow process. Application of high pressure causes permeabilization of the cell structure (Dornenburg and Knorr, 1993; Eshtiaghi, Stute, and Knorr, 1994; Farr, 1990; Rastogi, Subramanian, and Raghavarao, 1994). This phenomenon has been exploited by Rastogi and Niranjan (1998) to enhance mass transfer rates during the osmotic dehydration of pineapple (Ananas comosus). High-pressure pre-treatments (100-800 MPa) were found to enhance both water removal and solid gain (Fig. 3). Measured diffusivity values for water were found to be four-fold greater, whilst solute (sugar) diffusivity values were found to be two-fold greater. Compression and decompression during the high-pressure pre-treatment itself caused the removal of a significant amount of water, which was attributed to cell wall rupture (Rastogi and Niranjan, 1998). Differential interference contrast microscopic examination showed the extent of cell wall break-up with applied pressure (Fig. 4). Sopanangkul, Ledward, and Niranjan (2002) demonstrated that the application of high pressure (100 to 400 MPa) could be used to accelerate mass transfer during ingredient infusion into foods. Application of pressure opened up the tissue structure and facilitated diffusion. However, pressures above 400 MPa also induced starch gelatinization, which hindered diffusion. The values of the diffusion coefficient were dependent on cell permeabilization and starch gelatinization. The maximum value of the diffusion coefficient observed represented an eight-fold increase over the values at ambient pressure.
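Diffusivity values such as those quoted above are typically extracted by fitting Fick's unsteady-state solution to drying or infusion data. A sketch for an infinite-slab geometry (the geometry and parameter values are illustrative assumptions, not those of the cited studies):

```python
import math

def slab_moisture_ratio(d_eff: float, t: float, half_thickness: float,
                        terms: int = 100) -> float:
    """Fickian moisture ratio for an infinite slab with uniform initial
    content and surface at equilibrium:
    MR = sum_i 8/((2i+1)^2 pi^2) * exp(-(2i+1)^2 pi^2 D t / (4 L^2))."""
    L = half_thickness
    mr = 0.0
    for i in range(terms):
        m = 2 * i + 1
        mr += (8.0 / (m * m * math.pi ** 2)) * math.exp(
            -(m * m * math.pi ** 2 * d_eff * t) / (4.0 * L * L))
    return mr

# Because D and t enter only as the product D*t, a four-fold higher
# diffusivity reaches the same moisture ratio in a quarter of the time.
mr_control = slab_moisture_ratio(1e-9, 3600.0, 0.005)
mr_treated = slab_moisture_ratio(4e-9, 900.0, 0.005)
```

Under this model, the four-fold water diffusivity reported after high-pressure pre-treatment translates directly into a proportionally shorter osmotic dehydration time for a given target moisture content.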
The synergistic effect of cell permeabilization due to high pressure and osmotic stress as the dehydration proceeds was demonstrated more clearly in the case of potato (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003). The moisture content was reduced and the solid content increased in the case of samples treated at 400 MPa. The distribution of relative moisture (M/M₀) and solid (S/S₀) content as well as the cell permeabilization index (Zp) (shown in Fig. 5) indicate that the rate of change of moisture and solid content was very high at the interface and decreased towards the center (Rastogi, Angersbach, and Knorr, 2000a, 2000b, 2003).
Most dehydrated foods are rehydrated before consumption. Loss of solids during rehydration is a major problem associated with the use of dehydrated foods. Rastogi, Angersbach, Niranjan, and Knorr (2000c) have studied the transient variation of moisture and solid content during rehydration of dried pineapples, which were subjected to high pressure treatment prior to a two-stage drying process consisting of osmotic dehydration and finish-drying at 25°C (Fig. 6). The diffusion coefficients for water infusion as well as for solute diffusion were found to be significantly lower in high-pressure pre-treated samples. The observed decrease in water diffusion coefficient was attributed to the permeabilization of cell membranes, which reduces the rehydration capacity (Rastogi and Niranjan, 1998). The solid infusion coefficient was also lower, and so was the release of the cellular components, which form a gel-network with divalent ions binding to de-esterified pectin (Basak and Ramaswamy, 1998; Eshtiaghi, Stute, and Knorr, 1994; Rastogi, Angersbach, Niranjan, and Knorr, 2000c). Eshtiaghi, Stute, and Knorr (1994) reported that high-pressure treatment in conjunction with subsequent freezing could improve mass transfer during rehydration of dried plant products and enhance product quality.
Figure 4 Microstructures of control and pressure treated pineapple (a) control; (b) 300 MPa; (c) 700 MPa. (1 cm = 41.83 µm) (from Rastogi and Niranjan, 1998).
Ahromrit, Ledward, and Niranjan (2006) explored the use of high pressures (up to 600 MPa) to accelerate water uptake kinetics during soaking of glutinous rice. The results showed that the length and the diameter of the rice were positively correlated with soaking time, pressure and temperature. The water uptake kinetics was shown to follow the well-known Fickian model. The overall rates of water uptake and the equilibrium moisture content were found to increase with pressure and temperature.
Zhang, Ishida, and Isobe (2004) studied the effect of high-pressure treatment (300-500 MPa for 0-380 min at 20°C) on the water uptake of soybeans and resulting changes in their microstructure. The NMR analysis indicated that water mobility in high-pressure soaked soybean was more restricted and its distribution was much more uniform than in controls. The SEM analysis revealed that high pressure changed the microstructures of the seed coat and hilum, which improved water absorption and disrupted the individual spherical protein body structures. Additionally, the DSC and SDS-PAGE analysis revealed that proteins were partially denatured during the high pressure soaking. Ibarz, Gonzalez, and Barbosa-Canovas (2004) developed kinetic models for water absorption and cooking time of chickpeas with and without prior high-pressure treatment (275-690 MPa). Soaking was carried out at 25°C for up to 23 h and cooking was achieved by immersion in boiling water until they became tender. As the soaking time increased, the cooking time decreased. High-pressure treatment for 5 min led to reductions in cooking times equivalent to those achieved by soaking for 60-90 min.
Ramaswamy, Balasubramaniam, and Sastry (2005) studied the effects of high pressure (33, 400 and 700 MPa for 3 min at 24 and 55°C) and irradiation (2 and 5 kGy) pre-treatments on hydration behavior of navy beans by soaking the treated beans in water at 24 and 55°C. Treating beans under moderate pressure (33 MPa) resulted in a high initial moisture uptake (0.59 to 1.02 kg/kg dry mass) and a reduced loss of soluble materials. The final moisture content after three hours of soaking was the highest in irradiated beans (5 kGy) followed by high-pressure treatment (33 MPa, 3 min at 55°C). Within the experimental range of the study, Peleg's model was found to satisfactorily describe the rate of water absorption of navy beans.
A reduction of 40% in oil uptake during frying was observed, when thermally blanched frozen potatoes were replaced by high pressure blanched frozen potatoes. This may be due to a reduction in moisture content caused by compression and decompression (Rastogi and Niranjan, 1998), as well as the prevalence of different oil mass transfer mechanisms (Knorr, 1999).
Solid Liquid Extraction
The application of high pressure leads to rearrangement in tissue architecture, which results in increased extractability even at ambient temperature. Extraction of caffeine from coffee using water could be increased by the application of high pressure as well as an increase in temperature (Knorr, 1999). The effect of high pressure and temperature on caffeine extraction was compared to extraction at 100°C as well as atmospheric pressure (Fig. 7). The caffeine yield was found to increase with temperature at a given pressure. The combination of very high pressures and lower temperatures could become a viable alternative to current industrial practice.
Figure 5 Distribution of (a, b) relative moisture and (c, d) solid content as well as (e, f) cell disintegration index (Zp).
There is limited data on the nutritional status of Asian patients with various aetiologies of cirrhosis. This study aimed to determine the prevalence of malnutrition and to compare nutritional differences between various aetiologies.
A cross-sectional study of adult patients with decompensated cirrhosis was conducted. Nutritional status was assessed using standard anthropometry, serum visceral proteins and subjective global assessment (SGA).
Thirty six patients (mean age 59.8 ± 12.8 years; 66.7% males; 41.6% viral hepatitis; Child-Pugh C 55.6%) with decompensated cirrhosis were recruited. Malnutrition was prevalent in 18 (50%) patients and the mean caloric intake was low at 15.2 kcal/kg/day. SGA grade C, as compared to SGA grade B, demonstrated significantly lower anthropometric values in males (BMI 18.1 ± 1.6 vs 26.3 ± 3.5 kg/m2, p < 0.0001; MAMC 19.4 ± 1.5 vs 24.5 ± 3.6 cm, p = 0.002) and females (BMI 19.4 ± 2.7 vs 28.9 ± 4.3, p = 0.001; MAMC 18.0 ± 0.9 vs 28.1 ± 3.6, p < 0.0001), but not with visceral proteins. The SGA demonstrated a trend towards more malnutrition in Child-Pugh C compared to Child-Pugh B liver cirrhosis (40% grade C vs 25% grade C, p = 0.48). Alcoholic cirrhosis had a higher proportion of SGA grade C (41.7%) compared to viral (26.7%) and cryptogenic (28.6%) cirrhosis, but this was not statistically significant.
Significant malnutrition in Malaysian patients with advanced cirrhosis is common. Alcoholic cirrhosis may have more malnutrition compared to other aetiologies of cirrhosis.
Cirrhosis of the liver is a devastating condition, commonly the result of decades of chronic inflammation from toxin (eg alcohol), viral infection (eg Hepatitis B) or immune mediated disease (eg autoimmune disease). As a result of the complex pathophysiological processes associated with cirrhosis, it results in significant morbidity such as gastrointestinal bleeding from portal hypertension, and eventual mortality in many patients . The prognosis of patients with advanced cirrhosis is grim, with a 5-year survival rate of <10%. Patients with decompensated liver cirrhosis form the majority of cases that are admitted into gastroenterology units world-wide and represent a significant burden on health-care resources .
In addition to the associated morbidity highlighted above, protein-energy malnutrition (PEM) has often been observed in patients with liver cirrhosis [3,4]. Previous studies in Western patients have documented malnutrition rates from 20% in compensated liver cirrhosis up to 60% in decompensated liver cirrhosis . Causes for malnutrition in liver cirrhosis are known to include a reduction in oral intake (for various causes), increased protein catabolism and insufficient synthesis, and malabsorption/maldigestion associated with portal hypertension [3,5,6]. Although a consequence of the disease, malnutrition alone can lead to further morbidity in patients with liver cirrhosis. Increased rates of septic complications, poorer quality of life, and a reduced life span have all been observed in cirrhotics with poorer nutrition status compared to those without [7,8].
In Asia, the high prevalence of chronic Hepatitis B infection has resulted in large numbers of people developing liver cirrhosis with its associated complications. Most of the data on malnutrition in patients with cirrhosis have been derived from Western patients in whom chronic alcohol ingestion has been the commonest aetiology. Alcoholic patients are known to develop malnutrition for other reasons apart from liver damage per se. It is uncertain, therefore, if Asian patients with cirrhosis have the same degree of malnutrition and its resultant morbidity as patients with cirrhosis from other parts of the world.
The aims of this study were: a) to determine the prevalence of malnutrition in Malaysian patients with cirrhosis using standard nutritional assessment tools and b) to compare nutritional differences between various aetiologies.
Local institutional ethics committee approval was sought before commencement of the study. A cross-sectional study of Asian patients admitted for decompensation of cirrhosis to this tertiary institution, between August 2006 and March 2007, was undertaken. The inclusion criteria were adults aged 18 years and above, admitted for the reason of decompensation of cirrhosis. Patients with hepatocellular carcinoma and severe, i.e. Grade 3 or 4, hepatic encephalopathy were excluded. Eligible patients were given an information sheet in both English and Malay language detailing the objectives and nature of the study. Informed consent was obtained in all patients prior to participation.
Cirrhosis was diagnosed based on a combination of clinical features, blood profile and radiological imaging. Clinical features were those of portal hypertension, i.e. ascites and/or gastrointestinal varices. Blood profile included evidence of thrombocytopenia and/or coagulopathy. Radiological features, either with trans-abdominal ultrasound or computerized tomography, had to demonstrate a small shrunken liver with or without splenomegaly and intra-abdominal varices. Severity of liver disease was calculated according to the Child-Pugh score with grades A (mild) to C (severe) indicating degree of hepatic reserve and function .
Nutritional assessment was based on the following: anthropometry, visceral proteins, lean body mass and subjective global assessment (SGA). All measurements were taken by the same single investigator, to avoid any inter-observer variation.
All patients in the study had a baseline body mass index (BMI), i.e. weight (kg)/height (m)², performed. Although a crude measure of nutritional status, BMI was used as a baseline comparison between cirrhotic patients and the local healthy population. Further anthropometric measurements included the following: midarm circumference (MAC), triceps skinfold thickness (TST), midarm muscle circumference (MAMC) and handgrip strength. MAC was measured to the nearest centimeter with a measuring tape at the right arm. TST, an established measure of fat stores, was measured to the nearest millimeter at the right arm using a Harpenden skinfold caliper (Baty Ltd, British Indicators) in a standard manner. Three measurements were taken for both TST and MAC, with average values calculated and recorded. Mid-arm muscle circumference (MAMC), an established measure of muscle protein mass, was calculated from MAC and TST using a standard formula: MAMC = MAC - (3.1415*TSF), with the skinfold thickness (TSF, i.e. TST) expressed in centimeters so that the units match MAC.
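The two anthropometric formulas above can be sketched in code. The patient values in the example are purely hypothetical (not from the study data); note that the skinfold reading, recorded in millimeters, must be converted to centimeters before applying the MAMC formula.

```python
# Sketch of the two anthropometric formulas quoted in the text.
# Units: weight in kg, height in metres, MAC and TSF in cm.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def mamc(mac_cm: float, tsf_cm: float) -> float:
    """Mid-arm muscle circumference: MAMC = MAC - (3.1415 * TSF)."""
    return mac_cm - 3.1415 * tsf_cm

if __name__ == "__main__":
    # Hypothetical patient: 55 kg, 1.65 m, MAC 26.0 cm, TSF 12 mm (= 1.2 cm).
    print(round(bmi(55.0, 1.65), 1))   # 20.2
    print(round(mamc(26.0, 1.2), 1))   # 22.2
```

A MAMC below the 5th percentile of the reference population was the study's definition of malnutrition, so in practice the computed value would be compared against percentile tables, which are not reproduced here.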
Handgrip strength, a simple and effective tool to measure nutritional status, was measured with a hydraulic hand dynamometer (JAMAR) in kilogram force (Kg/F) . Three measurements were made on each arm and an average taken from all measurements. A combination of handgrip strength <30 kg/F and MAMC <23 cm had previously been shown to have a 94% sensitivity and 97% negative predictive value in identifying malnourished patients .
Serum albumin concentration is the most frequently used laboratory measure of nutritional status. Although non-specific, it has been used to assess change in nutritional status and to stratify the risk of malnutrition. A reduction in serum albumin in the absence of other causes has been shown to reflect liver damage, and hence it forms part of the standard liver function test panel. Serum transferrin has a half-life of 9 days and can be used as a marker for malnutrition. Good correlation between transferrin level and the Child-Pugh score has been demonstrated before, and a reduced level of serum transferrin is additionally indicative of decreased caloric intake.
Subjective global assessment
Subjective global assessment (SGA) is a simple evaluation tool that allows physicians to incorporate clinical findings and subjective patient history into a nutritional assessment . Based on history taking and physical examination, nutritional ratings of patients are obtained as follows: well-nourished-A, moderately malnourished-B and severely malnourished-C. The SGA has been shown to be a valid and useful clinical nutritional assessment tool for patients of various medical conditions .
Malnutrition was defined as <5th percentile MAMC for purposes of standardization with the literature and for accurate comparisons with other cirrhotic populations. However, it is recognized that other markers of malnutrition, such as hand grip strength and SGA, have been used, albeit in fewer studies.
Dietary intake and assessment
Assessment of individual patient's oral intake during hospitalization was determined by the dietary recall method done every three days for two weeks and an average intake was calculated and recorded. The objective was to determine the adequacy of caloric intake per patient with minimum reporting bias. Calculation of calories of food and drinks intake (composition of the diet) was based on local reference data .
All data was entered into Statistical Packages for the Social Sciences (SPSS) version 13.0 (Chicago, Illinois, USA) software for analysis. Continuous variables were expressed as means with standard deviation and analysed with student's t-test or Mann-Whitney where appropriate whilst categorical data were analysed using the χ2 test. For the comparison of nutritional status in cirrhotic patients of various etiologies, SGA was also utilized as this has been shown to reliably identify malnutrition-related muscle dysfunction . Statistical significance was assumed at a p value of <0.05.
A total of 36 patients with decompensated liver cirrhosis were recruited during the study period. The basic demography and clinical features are highlighted in Table 1. The mean age of the patients was 59.8 ± 12.8 years and the most common reason for admission was tense ascites requiring paracentesis. Viral hepatitis (n = 15, 41.6%) and alcoholic liver disease (n = 12, 33.3%) were the most common aetiology of cirrhosis. 7/12 patients with alcoholic liver disease had an active alcohol intake at the time of the study. All patients had advanced liver disease with 16 (44.4%) cases of Child-Pugh B and 20 (55.6%) cases of Child-Pugh C cirrhosis.
Table 1. Patient profile
Malnutrition, i.e. MAMC < 5th percentile, was prevalent in 18/36 (50%) patients and the mean caloric intake of all cirrhotic patients was low at 15.2 kcal/kg/day. Biochemically, the mean serum albumin (20.6 ± 6.0 g/l) and the mean serum transferrin (1.6 ± 0.7 g/l) were lower than normal values. 24 (66.7%) patients had SGA Grade B nutritional status and 12 (33.3%) were SGA Grade C, i.e. all patients had some level of malnutrition based on the SGA scale.
Table 2 illustrates the nutritional parameters of the subjects, according to SGA grades. As expected, in both male and female patients with cirrhosis, mean values of anthropometric measurements such as BMI, MAMC and TST demonstrated significant differences in values between cirrhotic patients in SGA grades B and C, with a higher SGA grade correlating well with lower anthropometric values. However, this difference was not observed with visceral proteins such as serum albumin or transferrin (Table 2).
Table 2. Nutritional parameters in patients with cirrhosis according to gender and Subjective Global Assessment grades
Differences in nutritional status in Child-Pugh B and C liver cirrhosis was assessed with the SGA (Table 3). There was a higher proportion of patients with SGA grade C in Child-Pugh C cirrhotic compared to Child-Pugh B cirrhotic patients, although this was not statistically significant ( 40% vs 25%, p = 0.48). However, serum albumin (17.9 ± 4.4 vs 24.1 ± 6.0 g/L, p = 0.001) and transferrin (1.3 ± 0.6 vs 2.0 ± 0.5 g/L, p < 0.0001) levels were demonstrated to be significantly lower in patients with Child-Pugh C liver cirrhosis compared to those with Child-Pugh B disease. Caloric intake was further observed to be significantly less in patients with Child-Pugh C disease compared to patients with Child-Pugh B disease (13.3 ± 4.9 vs 17.6 ± 5.7 Kcal/kg/day, p = 0.018).
Table 3. Subjective Global Assessment in varying severity and aetiologies of cirrhosis
Aetiology of liver disease and nutritional parameters
The incidence of malnutrition, defined as % < 5th percentile MAMC, in the different aetiologies of cirrhosis was as follows: alcoholic liver disease n = 9/12 (75%), viral hepatitis (Hepatitis B & C combined) n = 5/15 (33.3%), cryptogenic n = 2/7 (28.6%) and autoimmune n = 1/2 (50%). Differences in nutritional status between the various aetiologies of cirrhosis were examined with the SGA (Table 3). Excluding the extremely small number of autoimmune cirrhotic patients, there was a non-statistically significant increase in the proportion of SGA grade C cases in patients with an alcoholic aetiology (41.7%), compared to those with a viral (26.7%) and cryptogenic (28.6%) aetiology for cirrhosis.
This study of nutritional assessment in Malaysian patients with advanced cirrhosis has several limitations. The sample size was small, resulting in some limitations with the relevance of the results from the study. Furthermore, the study was conducted on a selected group of patients with cirrhosis, namely those with advanced end-stage disease who had been admitted to hospital for decompensation. Additionally, a significant proportion of patients with ascites did not have dry weight measurements done, which could have influenced BMI and calorie calculation results. Nevertheless, this study provides useful nutritional data which is currently lacking among Asian patients with advanced cirrhosis.
This study demonstrated that the prevalence of malnutrition, defined by MAMC < 5th percentile, was 50% in Malaysian patients with advanced cirrhosis. The patients with cirrhosis exhibited a range of nutritional abnormalities, with protein-energy malnutrition of 50% (MAMC < 5th percentile) and fat store depletions of 30% (TST <5th percentile). BMI measurements in less malnourished cirrhotic patients were not different from the general population, mainly due to the fact that ascites and peripheral oedema contributed significantly to body weight in cirrhotic patients, and true lean body mass was not taken into account . The poor caloric intake of 15.2 kcal/kg/day is lower than the recommended level (24 - 40 kcal/kg/day ), and may have been one of the causes of this malnutrition, although other factors are well recognized [3,5].
The level of malnutrition identified in this study appears to be comparable to published data from Italy (34% of cirrhotics with MAMC < 5th percentile) , a hospital-based study of 315 patients from France (58.7% of Child-Pugh C cirrhotic patients with MAMC < 5th percentile) and a previous study from Thailand (38% of cirrhotics with TSF <10th percentile) . This data suggests that nutritional deficiencies in cirrhosis are likely to be uniform worldwide, regardless of the ethnic distribution or socioeconomic status (believed to be higher in Western patients compared to Asians) of the population involved.
This study further supported the utility of the SGA in Asian patients with cirrhosis. Although anthropometric tools such as the MAMC and hand grip strength are known to be better predictors of malnutrition in adult patients with cirrhosis , these tools are not necessarily practical for everyday use. The SGA, compared to standard anthropometry, is much more applicable in clinical practice and has previously been demonstrated to be highly predictive of malnutrition in advanced cirrhosis . We demonstrated in this study that SGA grade C patients with cirrhosis had significantly lower anthropometric measurements compared to SGA grade B cases, indicating that the SGA was able to differentiate nutritional status fairly well.
In terms of clinical severity, we were able to demonstrate a trend towards a higher proportion of SGA grade C in patients with Child-Pugh C cirrhosis compared to Child-Pugh B disease. The lack of statistical significance in this observation was probably a result of the small sample size of our study population, i.e. a Type II statistical error. Furthermore, the caloric intake in patients with more advanced cirrhosis was significantly lower with a likelihood of more malnutrition in this group. In this study, we demonstrated that serum visceral protein levels did not differ significantly between SGA Grade B and C, but varied markedly between Child-Pugh B and C liver disease. This indicated that visceral proteins were not influenced by nutritional status but more by the severity of hepatic dysfunction .
Differences in malnutrition between various aetiologies of cirrhosis were explored in this study. The frequency of malnutrition in alcohol-related cirrhosis was higher than other aetiologies and the SGA demonstrated a trend towards more severe malnutrition in adults with alcoholic cirrhosis compared to other types of cirrhosis. The latter was not statistically significant, probably as result of the small number of patients in this study. One of the possible explanations for this finding was that 7/12 alcoholic patients were still actively consuming alcohol at the time of the study, leading to more severe nutritional deficiencies in these patients as previously reported . Our findings are in agreement with studies that have been conducted in larger populations. In a study of 1402 patients with cirrhosis in Italy, there was a higher incidence of malnutrition in alcoholic cirrhosis patients compared to other aetiologies of liver cirrhosis . In a Thai study of 60 patients with cirrhosis, the degree of malnutrition was higher in patients with alcoholic cirrhosis and these patients had more complications of cirrhosis compared to other aetiologies .
In summary, malnutrition in Malaysian patients with various aetiologies of cirrhosis is common, together with an inadequate caloric intake. Clinical assessment with the SGA demonstrated a trend towards more malnutrition with increasing clinical severity and in alcohol related liver disease, although this was not statistically significant. Serum visceral proteins were not found to be an appropriate tool for nutritional assessment in adults with decompensated cirrhosis. A study with a larger sample is required to substantiate these findings.
The authors declare that they have no competing interests.
MLST designed the study, performed data collection, data analysis and drafted the manuscript. KLG provided administrative support. SHMT provided technical support. SR assisted in data analysis and interpretation. SM assisted in data interpretation and critical revision of the manuscript. All authors reviewed and approved final version of the manuscript.
This study was funded by the following bodies:
1. Long-Term Research fund (Vote F), University of Malaya (Vote no: FQ020/2007A)
2. Educational grant from the Malaysian Society of Gastroenterology and Hepatology
Ann Intern Med 1967, 66(1):165-198.
Coltorti M, Del Vecchio-Blanco C, Caporaso N, Gallo C, Castellano L: Liver cirrhosis in Italy. A multicentre study on presenting modalities and the impact on health care resources. National Project on Liver Cirrhosis Group. Ital J Gastroenterol 1991, 23(1):42-48.
J Med Assoc Thai 2001, 84(7):982-988.
Southeast Asian J Trop Med Public Health 1979, 10(4):621-626.
J Fla Med Assoc 1979, 66(4):463-465.
Med J Malaysia 2000, 55(1):108-128.
Figueiredo FA, Dickson ER, Pasha TM, Porayko MK, Therneau TM, Malinchoc M, DiCecco SR, Francisco-Ziller NM, Kasparova P, Charlton MR: Utility of standard nutritional parameters in detecting body cell mass depletion in patients with end-stage liver disease. | 1 | 2 |
We've worked out an exclusive deal for our members to bring you this product at a price lower than what everyone else pays anywhere on the internet!
The definitive answer to correct lip-sync error for up to four sources!
When you watch TV or movies, do you ever notice how picture and sound are sometimes OUT OF SYNC? The presenter's lips don't move quite at the same time as their voice? Irritating isn't it. This is known as lip sync error.
Even if you haven't consciously noticed lip-sync error (we often avoid noticing it by subconsciously looking away), research at Stanford University discovered it causes a negative impact on our perception of the characters and story.
Lip sync error affects a huge number of displays, including modern plasma TVs, LCD screens, DLP TVs and digital projectors.
The Felston DD740 solves the frustrating problem of lip sync error for anyone with an A/V amplifier or home theater system.
What causes lip sync error?
There are many causes but most boil down to the video signal being delayed more than the audio signal allowing speech to be heard 'before' the lip movement that produced it is seen.
Digital image processing within broadcasts and within modern displays delays video and allows audio to arrive too soon.
Sound "before" the action that produces it can never occur in nature and is therefore very disturbing when the brain tries to process this conflicting and impossible visual and aural information.
Most people initially only notice lip-sync error when it exceeds 40 to 75 ms but this varies enormously and really depends upon the individual's defence mechanism - how far he can look away from the moving lips so as to ignore the increasing lip-sync error. We call the value at which it is noticed consciously their "threshold of recognition".
An individual's "threshold of recognition" falls greatly once it has been reached and lip-sync error has been noticed. At that point their defence mechanism can no longer compensate and the sync problem enters their conscious mind. The same person who was never bothered by a 40 ms lip-sync error may, after noticing 120 ms error, become far more sensitive and notice errors only a small fraction of their previous "threshold". Many people can "see" lip-sync errors as small as one millisecond and some can even detect 1/3 ms errors.
How do you fix lip sync error?
The only way to correct lip-sync error caused by delayed video is to delay audio an equal amount.
The Felston DD740 digital audio delay solves lip-sync error by letting you add an audio delay to compensate for all the cumulative video delays - no matter what their cause - at the touch of a button on its remote.
It connects between four digital audio sources and your AV receiver (or digital speaker system) allowing you to delay the audio to match the video achieving "perfect lip-sync".
Unlike the audio delay feature found in most a/v receivers, the DD740 is designed for easy "on-the-fly" adjustment while viewing with no image disturbance. This makes fine tuning for perfect lip-sync practical as it changes between programs or discs, and the DD740's 680ms delay corrects larger lip-sync errors common in HDTV.
Why doesn't HDMI 1.3+ fix this?
The widely misunderstood "automatic lip-sync correction" feature of HDMI 1.3 does nothing more than "automatically" set the same fixed delay most receivers set manually. It does nothing to correct a/v sync error already in broadcasts or discs which changes from program to program and disc to disc. Ironically, it can make lip-sync error "worse" when audio arrives delayed.
Does the Felston DD740 work with HD lossless audio found on Blu-ray discs?
No. The Felston DD740 is a S/PDIF coax/toslink device. Lossless audio such as DTS-HD Master Audio and Dolby TrueHD found on Blu-ray discs is only available over HDMI.
Is there a similar audio delay box that works over HDMI?
We are not aware of any similar audio delay boxes that accept HDMI. There are other manufacturers of s/pdif delay boxes similar to the Felston DD740, but both are over twice the price and don't offer as many features (neither has numeric pad delay entry, 36 presets, or 1/3 ms adjustment). An HDMI delay box would need to be an HDMI "repeater" (often called a splitter), since the HD audio is HDCP encrypted along with the video. It would require an HDMI "receiver" chip (like TVs have) to decrypt the audio and video data and a "huge" memory to store it for delay, but it would also require an HDMI "transmitter" chip (like a Blu-ray player has) to HDCP encrypt the re-aligned audio and video for output. If an HDMI delay box ever comes to market it will no doubt be expensive, but like our other products we would offer it to our members at the best price on the internet.
Felston DD740 Features
680ms delay (340ms for 96kHz signals)
On-the-fly adjustment with no image overlay
Tweaking in 1ms and 1/3ms steps
36 preset delays for instant recall
Fully featured remote control with numeric keypad for discrete delay entry
Discrete input switching, with input's last delay restored
Automatic optical-to-coax/coax-to-optical conversion
4 digital audio inputs, 2 digital audio outputs (optical and coax)
Adjustable display brightness
Discrete IR commands for integration with learning remotes
No effect on audio quality thanks to bit-perfect reproduction
The Felston digital audio delays solve lip-sync error by allowing you to delay the digital audio signal to match the delay* in the video signal thereby restoring perfect lip-sync.
The delay unit is inserted in the digital audio path between your video source (DVD/Blu-ray disc player, DVR, etc.) and your AV receiver as in the diagram above. Since the DD740's "bit-perfect reproduction" does not change the digital audio signal, it is compatible with PCM and all present and future s/pdif surround sound formats at both 48 KHz and 96 KHz.
Since there is nothing in the video or audio signal to define when they are in sync it is a subjective adjustment and this is where the remote control excels. It remembers the last delay setting used on each input and includes 36 presets where common delays can be stored for instant recall. But most importantly, the + and - buttons allow dynamic "on-the-fly" delay adjustments while watching with no image disturbance - an essential feature allowing tweaking for "perfect-sync". These are necessary features for true lip-sync correction and not generally available on even the most expensive AV receivers that claim a lip-sync delay feature.
At first thought it might appear the DD740 audio delay could not correct for "already delayed audio in the arriving signal" but in conjunction with the video delay of your LCD, DLP, or plasma display it actually can - up to the display's video delay. That is, if your display delays video 100 ms your DD740 will correct lip-sync errors from 100 ms audio lagging to 580 ms audio leading.
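The arithmetic above can be sketched as a small helper. This is an illustration of the reasoning in the text, not vendor code: with a maximum audio delay of 680 ms and a display that delays video by D ms, the unit can correct audio lagging by up to D ms and audio leading by up to 680 − D ms.

```python
# Sketch of the correction-range arithmetic described in the text.
# The DD740 adds 0..680 ms of audio delay (340 ms at 96 kHz); a display
# that delays video by D ms shifts that window, so errors from
# "audio lagging by D" to "audio leading by 680 - D" become correctable.

MAX_DELAY_MS = 680  # use 340 for 96 kHz signals

def correctable_range(display_video_delay_ms: int,
                      max_delay_ms: int = MAX_DELAY_MS) -> tuple[int, int]:
    """Return (max correctable audio lag, max correctable audio lead) in ms."""
    return display_video_delay_ms, max_delay_ms - display_video_delay_ms

if __name__ == "__main__":
    # The 100 ms example from the text: 100 ms lag to 580 ms lead.
    lag, lead = correctable_range(100)
    print(lag, lead)  # 100 580
```

This matches the worked example in the text: a 100 ms display delay yields a correctable window from 100 ms audio lagging to 580 ms audio leading.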
* Normally lip-sync error is due to video delays in both the arriving signal as well as in the display allowing audio to arrive too soon but when broadcasters over-correct for the video delay they added the arriving signal might have audio delayed instead of video.
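The worked example above (a 680 ms maximum delay combined with a display that delays video by 100 ms) can be sketched as a small calculation. This is purely illustrative; the function name and structure are our own, not vendor software.

```python
# Illustrative only: the effective lip-sync correction window when a
# fixed audio-delay box sits in front of a display that itself delays
# video. Figures are taken from the text: 680 ms maximum audio delay,
# and an example display video delay of 100 ms.

MAX_AUDIO_DELAY_MS = 680  # published maximum at 32-48 kHz sample rates

def correction_window(display_video_delay_ms,
                      max_audio_delay_ms=MAX_AUDIO_DELAY_MS):
    """Return (max_audio_lag_ms, max_audio_lead_ms) that can be corrected.

    Setting the box's delay below the display's video delay corrects
    lagging audio; setting it above corrects leading audio.
    """
    max_lag = display_video_delay_ms
    max_lead = max_audio_delay_ms - display_video_delay_ms
    return max_lag, max_lead

lag, lead = correction_window(100)
print(f"correctable: {lag} ms audio lagging to {lead} ms audio leading")
# → correctable: 100 ms audio lagging to 580 ms audio leading
```

In other words, the display's own video delay shifts the whole correction window, which is exactly the claim made in the paragraph above.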
Using the DD740
In standby mode, audio passes through with no delay while coax to optical and optical to coax conversion remains active. When the DD740 is switched on, the signal is output with your last selected delay.
When you notice lip-sync error, correcting it is simply a matter of adding or subtracting audio delay. The plus and minus delay buttons allow adjustment in 1 ms steps (or even 1/3 ms). You adjust while watching your program and there is no image disturbance at all as you press the buttons and shift the audio into alignment with video.
As you use your DD740 you will notice that different sources, different discs, and different broadcasts require different delays for perfect lip-sync so the DD740 includes 36 delay presets (9 per input) to remember these commonly used settings making it easy to get to the optimum delay quickly.
It also features direct numeric entry so if you know the desired delay you just enter the numbers. That feature is even more valuable when used with programmable learning remote controls (e.g. Pronto, Harmony, URC, etc.) since it allows full control of the DD740 using its comprehensive discrete IR commands.
A/V receivers do not offer all these DD740 features but the most important and overriding advantage of the DD740 is the ease of delay adjustment while watching with no image overlays to disrupt your viewing.
With an A/V receiver that forces you to use a set-up menu overlaying your image every time you need to adjust the delay, perfect lip-sync just isn’t practical.
But my amplifier only has an optical input!
No problem. The DD740 transmits the selected source to both outputs simultaneously. This means an a/v amplifier with just one input (optical or coax) can be used with four digital audio sources (two coax and two optical).
Which types of audio signal can the DD740 delay?
In order to solve lip-sync issues, the DD740 delays digital audio signals passing between your source equipment (e.g. disc player, set-top box) and your home theater amplifier via a DIGITAL AUDIO CABLE. Digital audio cable is either optical (toslink) or a coax cable fitted with a single RCA phono plug at each end.
NOTE: The DD740 is not directly compatible with ANALOG (stereo) audio signals. Analog audio signals use a pair of leads that connect to two RCA phono sockets (usually one with a red plastic insert and one white). However, if you use a home theater amplifier then analog sources can be used with the DD740 via an adaptor.
Can the DD740 delay DTS?
Yes. In fact the DD740 can delay any s/pdif digital audio (coax and optical) format that is used today, i.e. Dolby Digital, DTS, Dolby Digital EX, DTS 96/24, PCM, etc.
Does the DD740 reduce sound quality?
Absolutely Not. There is no change at all in the quality of audio when using a DD740, as the audio is being transmitted digitally. The DD740 simply stores the digital bits coming in and then outputs them, unchanged, after the delay period. Since the data is digital, a perfect copy is made with absolutely no deterioration in sound quality.
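Conceptually, that bit-perfect delay is a first-in, first-out buffer: samples go in, and the same samples come out unchanged a fixed number of sample periods later. The sketch below illustrates only the principle (the DD740's internals are not published); the buffer length follows the usual delay-times-sample-rate rule.

```python
from collections import deque

def delay_line(samples, delay_ms, sample_rate_hz):
    """Yield samples unchanged, delayed by delay_ms.

    The buffer is pre-filled with silence (zeros); every input sample
    eventually comes out bit-identical, which is why a pure digital
    delay cannot degrade audio quality.
    """
    n = delay_ms * sample_rate_hz // 1000  # delay expressed in samples
    buf = deque([0] * n)                   # silence while the buffer fills
    for s in samples:
        buf.append(s)
        yield buf.popleft()

# Five input samples through a 2-sample delay (tiny numbers for clarity):
out = list(delay_line([10, 20, 30, 40, 50], delay_ms=2, sample_rate_hz=1000))
print(out)  # → [0, 0, 10, 20, 30]
```

Note that the output values are exact copies of the input, just shifted in time: no processing touches the audio data itself.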
Can I use the DD740 with an analog (stereo) source?
Yes! All that is needed is a low cost, third-party analog-to-digital converter. This simply connects between the analog audio source and the DD740. Such a converter costs about US$40 (25GBP) or less depending on your location.
Please note, the output from the DD740 is still digital audio, and so you will still need an AV amplifier with a digital audio input or a speaker system that accepts s/pdif digital audio input.
How can I connect more than 4 digital audio sources?
If you need to connect more than 4 digital sources, or for example have 3 sources that each require optical connections, then third-party adapters are available.
For instance, to connect an optical (toslink) source to a coax input of the DD740, a simple optical-to-coax converter may be used. These are available at low prices both from online stores and from audio accessory shops, typically priced at around US$30.
Alternatively, a powered digital audio switch may be used.
AVOID the use of mechanical toslink switches and splitters since they can reduce light levels and degrade the digital audio signals reaching the DD740 and may cause occasional dropouts in sound or not work at all. Toslink switches that do not require external power are definitely “mechanical” but remote controlled powered switches may also be mechanical internally. Powered switches that offer coax to Toslink and/or Toslink to coax conversion will not be mechanical and should work fine.
A suitable powered digital audio switch is the Midiman CO2, for example. It will connect to two digital sources, one coax and one optical. The CO2's output connects to any of the DD740's inputs, leaving the other three inputs available for a total of 5 sources. When the time comes to use one of the inputs connected via the CO2, simply move its switch to the source required.
How do I use a learning remote control with the DD740?
The DD740 includes features to allow extensive control by learning remotes.
What is the longest cable I can use with the DD740?
We recommend that, for best results, all cables (coax and optical) are kept to the shortest lengths practical. It is not possible to say exactly what the maximum length of cable is that may be used, since that will depend on the quality and condition of the cable and also on the equipment at the other end. However, as a guide a maximum length of 5 metres (15 feet) is advisable for any cable connected to the DD740.
In particular, we recommend that the DD740 is positioned near to your digital audio source and connected to it using short cables.
Problems that may occur if a cable is too long include audio drop outs (occasional short periods of silence) or loss of audio altogether.
Audio Delay Capabilities:
0 - 680 milliseconds in 1ms or 0.33ms steps (32-48kHz sample rate signals)
0 - 340 milliseconds in 1ms or 0.33ms steps (96kHz sample rate signals)
36 user-programmable presets (9 per input)
Remote control handset included? YES
Full functionality available from handset
May be integrated with learning remote controls, including additional discrete command codes
2 x Digital Audio In (coaxial) RCA phono socket (75 ohm)
2 x Digital Audio In (optical) toslink socket
Digital Audio Out (coaxial) RCA phono socket (75 ohm)
Digital Audio Out (optical) toslink socket
DC power supply socket
9V DC (+ve center pole), 200mA from power adaptor
Less than 2 Watts for the DD740 from 9VDC. AC power consumption will depend upon the country specific power adaptor used with the unit but will not exceed 5 watts in any case.
Size: 5.7" (145mm) x 4.1" (105mm) x 1.4" (35mm)
Weight: 9.9oz (280g) approx
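The delay figures listed above halve when the sample rate doubles (680 ms at 32-48 kHz, 340 ms at 96 kHz), which is consistent with a fixed-size sample buffer: the same number of stored samples represents half as much time at twice the rate. The sketch below infers a buffer size from the published 680 ms / 48 kHz figure; that buffer size and the 680 ms cap are assumptions for illustration, not documented internals.

```python
# Hedged sketch: why a fixed sample buffer gives half the delay at 96 kHz.
# BUFFER_SAMPLES is inferred from the published spec, not vendor data.

BUFFER_SAMPLES = 680 * 48_000 // 1000  # 32,640 samples (assumed)

def max_delay_ms(sample_rate_hz, buffer_samples=BUFFER_SAMPLES, cap_ms=680):
    """Delay ceiling for a fixed-size buffer, capped at the published 680 ms."""
    return min(cap_ms, buffer_samples * 1000 // sample_rate_hz)

for rate in (32_000, 44_100, 48_000, 96_000):
    print(f"{rate:>6} Hz -> up to {max_delay_ms(rate)} ms")
```

On this model, 48 kHz yields the full 680 ms while 96 kHz yields 340 ms, matching the specification table.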
Integration with learning remotes
The DD740 is compatible with learning remote controls, providing seamless integration with your A/V system.
Every learning remote is capable of replicating the IR commands of the DD740's own remote control. Please refer to the instructions that accompany your learning remote for details of how to do this.
In addition, the DD740 has extra IR commands that can be programmed into more sophisticated learning remotes such as the Philips Pronto and ProntoNEO. By sequencing these commands, total control of the DD740 may be achieved. For example, turning on the DD740, selecting the input, selecting the delay preset and even setting its display brightness, all from a single button press on your learning remote.
The full set of IR commands available are:
Power On* (discrete)
Power Off* (discrete)
Digit 0-9 (discrete)
Input A-D (discrete)
Preset 1-5 (discrete)
Preset 6-9* (discrete)
Brightness 1-5* (discrete)
*Only available when using a suitable programmable learning remote control.
DD740 digital audio delay
2 x AAA batteries
Kelso, Scottish Borders
Scottish Gaelic: Cealsaidh
[Image: Kelso seen from the Cobby Tweedside meadow]
[Image: Kelso shown within the Scottish Borders]
Distance to Edinburgh: 44 mi (71 km); to London: 350 mi (560 km)
Council area: Scottish Borders
Lieutenancy area: Roxburgh, Ettrick and Lauderdale
Sovereign state: United Kingdom
UK Parliament constituency: Berwickshire, Roxburgh and Selkirk
Scottish Parliament constituency: Ettrick, Roxburgh and Berwickshire
Kelso (Scottish Gaelic: Cealsaidh, Scots: Kelsae) is a market town and civil parish in the Scottish Borders area of Scotland. It lies where the rivers Tweed and Teviot have their confluence. The parish has a population of 6,385.
Kelso's main tourist draws are the ruined Kelso Abbey and Floors Castle, a William Adam designed house completed in 1726. The bridge at Kelso was designed by John Rennie who later built London Bridge.
The town of Kelso came into being as a direct result of the creation of Kelso Abbey in 1128. The town's name stems from the fact that the earliest settlement stood on a chalky outcrop, and the town was known as Calkou (or perhaps Calchfynydd) in those early days.
Standing on the opposite bank of the river Tweed from the now-vanished royal burgh of Roxburgh, Kelso and its sister hamlet of Wester Kelso were linked to the burgh by a ferry at Wester Kelso. A small hamlet existed before the completion of the Abbey in 1128, but the settlement started to flourish with the arrival of the monks. Many were skilled craftsmen, and they helped the local population as the village expanded. The Abbey controlled much of life in the Kelso-area burgh of barony, called Holydean, until the Reformation in the 16th century. After that, the power and wealth of the Abbey declined. The Kerr family of Cessford took over the barony and many of the Abbey's properties around the town. By the 17th century, they virtually owned Kelso.
In Roxburgh Street is the outline of a horseshoe petrosomatoglyph where the horse of Charles Edward Stuart cast a shoe as he was riding it through the town on his way to Carlisle in 1745. He is also said to have planted a white rosebush in his host's garden, descendants of which are still said to flourish in the area.
For some period of time the Kelso parish was able to levy a tax of 2 pence on every Scottish pint sold within the town. The power to do this was extended for 21 years in 1802 under the Kelso Two Pennies Scots Act when the money was being used to replace a bridge across the river Tweed that had been destroyed by floods.
Kelso High School provides secondary education to the town, and primary education is provided by Edenside Primary and Broomlands Primary.
The town offers plenty of sport and recreation. The River Tweed at Kelso is renowned for its salmon fishing, and there are two eighteen-hole golf courses as well as a National Hunt (jumping) horse racing track. The course is known as "Britain's Friendliest Racecourse"; racing first took place in Kelso in 1822.
In 2005 the town hosted the 'World Meeting of 2CV Friends' in the grounds of nearby Floors Castle. Over 7,000 people took over the town and are said to have brought in more than 2 million pounds to the local economy.
According to a letter dated October 17, 1788, 'The workmen now employed in digging the foundations of some religious houses which stood upon St. James' Green, where the great annual fair of that name is now held in the neighbourhood of this town, have dug up two sone [sic] coffins of which the bones were entire, several pieces of painted glass, a silver coin of Robert II, and other antique relics'.
The town's rugby union team (Kelso RFC) are highly respected, and their annual rugby sevens tournament takes place in early May. Famous former players include John Jeffrey, Roger Baird, Andrew Ker and Adam Roxburgh, all of whom featured in sevens teams that dominated the Borders circuit in the 1980s, including several wins in the blue ribbon event at Melrose. Kelso RFC also hold an annual fixture against Pontrhydyfen RFC, the oldest unbroken fixture between a Scottish and a Welsh side. Pontrhydyfen is a small village nestled in the beautiful South Wales Valleys, famous as the birthplace of the film actor Richard Burton and the vocalist Ivor Emmanuel. The fixture was founded some 47 years ago by Ian Henderson, a local Kelso businessman, and Tom Owen, fixture secretary of Pontrhydyfen RFC. The two teams currently play for the DT Owen Cup and alternate the venue: one year they play in Kelso (the first fixture venue) and the following year in Pontrhydyfen. The fixture has nurtured generations of friendships, and its 50th anniversary will be held in 2013. Others claim longer fixtures between Scottish and Welsh sides, but this is the longest unbroken one.
Every year in July, the town celebrates the border tradition of Common Riding, known as Kelso Civic Week. The festival lasts a full week and is headed by the Kelsae Laddie with his Right and Left Hand Men. The Laddie and his followers visit neighbouring villages on horseback, with the climax being the Yetholm Ride on the Saturday. There are many competitions and social events every day. There have been many songs written about Kelso (or Kelsae), most notably "Kelsae Bonnie Kelsae", but the most recent is "Yetholm Day", composed by Gary Cleghorn, a young follower of Civic Week for many years. The song tells the story of the Kelsae Laddie and his followers on the Saturday ride-out to Kirk Yetholm and Town Yetholm. Kelso also hosts its annual fair on the first weekend of September; the weekend includes drinking, dancing, street entertainers, live music, stalls and a free music concert, and attracts around 10,000 people to the town.
As a fund-raiser for Kelso Civic Week, Gary Cleghorn has involved Ex Laddies and locals to sing some of the old Kelso songs, plus some new songs by local artists, on a CD, "Songs of Kelso", which is sold in the town by local shops and public houses.
Sir Walter Scott attended Kelso Grammar School in 1783 and said of the town, "it is the most beautiful if not the most romantic village in Scotland". Another attraction is the Cobby Riverside Walk, which runs from the town centre to Floors Castle along the banks of the Tweed, passing the point where it is joined by the River Teviot. Kelso has two bridges that span the River Tweed. "Rennie's Bridge" was completed in 1803 to replace an earlier one washed away in the floods of 1797; it was built by John Rennie of Haddington, who later went on to build Waterloo Bridge in London, and his Kelso bridge is a smaller, earlier version of that design. The bridge was the cause of local rioting in 1854, when the Kelso population objected to paying tolls even after the cost of construction had been covered; the Riot Act was read, and three years later the tolls were abolished. Hunter's Bridge, a kilometre downstream, is a modern construction built to divert around the town the heavy traffic that had damaged Rennie's bridge.
Famous people from Kelso have included the civil engineer Sir James Brunlees (1816–1892), who constructed many railways in the United Kingdom as well as designing the docks at Avonmouth and Whitehaven. Sir William Fairbairn (1789–1874) was another engineer; he built the first iron-hulled steamship, the Lord Dundas, and constructed over 1,000 bridges using the tubular method which he pioneered. Thomas Pringle, the writer, poet and abolitionist, was born at nearby Blakelaw, a 500-acre (2.0 km2) farmstead four miles (6 km) to the south of the town, where his father was the tenant.
Floors Castle
Floors Castle is a large stately home just outside Kelso. It is a popular visitor attraction. Adjacent to the house is a large walled garden with a cafe, a small garden centre and the Star Plantation.
Kelso is twinned with two cities abroad:
The Voortrekker Road (H2-2):
The Voortrekker Road (H2-2), south-east of Pretoriuskop, was used by Carolus Trichardt, the son of the Voortrekker Louis Trichardt. He was commissioned in 1849 by the Transvaal Government of the time to open up a regular route between the northern interior and Delagoa Bay.
Albasini's caravans were the main users of this road. Over the years his porters transported thousands of kilograms of goods from the coast and carried back loads of ivory.
The trader Fernandes das Neves accompanied one of Albasini's caravans in 1860 and reported that it took 24 days to complete the 250-mile journey between the coast and Pretoriuskop.

The trip employed 150 Tsonga porters, each of whom carried 40 lb of trade goods; 68 more porters carried food and camping equipment, and 17 elephant hunters kept guard.
The Voortrekker Road was improved in 1896 by the trader Alois Nelmapius to cater for the transport of supplies to Lydenburg and Mac Mac, where gold had been discovered. The road was heavily used by transport riders on their way to what was then known as Portuguese East Africa (today Mozambique).
The road descends from the mountainous Pretoriuskop sourveld into the rolling hills of mixed bushwillow woodland past a number of geological features. It is a good drive for if you want to get into the game rich plains south of Skukuza.
It was on this road that the little terrier Jock of the Bushveld was born. Jock's story was written by his master, Sir Percy Fitzpatrick, a former transport rider who became a politician and businessman in the early 20th century.
The main landmark on this road is Ship Mountain (662m), used as a navigational aid by early pioneers and travellers. It is geologically distinctive from the surrounding granite countryside because it is made up of gabbro, a hardier relative of basalt.
Per an old tale, Ship Mountain was a fort used by the Sotho people in the 18th century to protect themselves and their cattle against Swazi raiders coming in from the south. Sotho warriors would hide families and livestock in the caves on the top of Ship Mountain and then use rocks to stop the Swazi from getting to the top.
At the foot of Ship Mountain was a trading store run by Chief Manungu, who was part of Joao Albasini's trading empire. Along this road you can still see the fence of the boma which was used during the first white Rhino relocation in Kruger Park, in which two bulls and two cows were released in October 1961.
In the 1960s 320 white Rhino were released into southern Kruger Park from the Umfolozi Game Reserve in KwaZulu-Natal, and a further 12 were released in the north. The Pretoriuskop area is one of the best locations to track and see white Rhino.
The Voortrekker Road roughly follows a line of thornveld to Afsaal. The significance of this is that the grass is more edible than the sourveld and the wildlife can easily be seen.
Even though there are not a lot of animals in this part of the Park, numbers are far higher than they once were. By the 1900s almost all of the game had been shot out of the region by early hunters.
It was so bad that when Stevenson-Hamilton surveyed the area in 1902, he noted that the only wild animal he saw between Ship Mountain and Skukuza was a single Reedbuck.
Closest Rest Camps: | 1 | 2 |
What do professional designers really do? This question needs to
be asked in order to answer why you need a design education and
what you need to study. The projects created by designers give form
to the communication between their client and an audience. To do
this, designers ask: What is the nature of the client? What is the
nature of the audience? How does the client want to be perceived by
the audience? Designers also explore the content of the message the
client wishes to send, and they determine the appropriate form and
media to convey that message. They manage the communication
process, from understanding the problem to finding the solution. In
other words, designers develop and implement overall communication
strategies for their clients.
Some of the projects presented here will probably seem familiar
because of their broad exposure in the media. Others, which are
limited to a particular audience, may surprise you. You'll see that
design arrests attention, identifies, persuades, sells, educates,
and gives visual delight. There is a streak of pragmatism in
American culture-our society tends to focus on results.
The processes that went into creating these design projects are
often invisible, but the designer's own words describe the
significant strategies. It's clear that some projects, because of
their size, would be inconceivable without considerable project
management skills. And the range of content clearly demonstrates
the designer's need for a good liberal arts education to aid in
understanding and communicating diverse design content.
The projects that follow represent various media, such as print
(graphic design's historic medium) and three-dimensional graphic
design media, including environmental graphic design, exhibitions,
and signage. Electronic media, such as television and computers, as
well as film and video are also represented. Various kinds of
communication are included, from corporate communications to
publishing and government communications. Some project focus on a
specialization within design, such as corporate identity programs
or type design. Information design and interface design (the design
of computer screens for interactivity) reflect the contemporary
need to streamline information and to use new media.
Three designer roles are also highlighted. Developed over a
lifetime, these careers go beyond the commonly understood role of
the designer. The corporate executive oversees design for a large
company; the university professor teaches the next generation of
designers and thus influences the future of the field; the design
entrepreneur engages in design initiation as an independent
business. Consider these and other design-related roles as you plan
your studies and early job experience.
The projects and designers presented her were selected to
illustrate the range of graphic design activities and to represent
the exceptional rather than the ordinary. Seeing the best can give
you a glimpse into the possibilities that await you in the
competitive, creative, and rewarding field of design.
Digital design is the creation of highly manipulated images on the
computer. These images then make their final appearance in print.
Although computers have been around since the forties, they were
not reasonable tools for designers until the first Macintoshes came
out in 1984. April Greiman was an early computer enthusiast who
believes that graphic design has always been involved with
technology. After all, Gutenberg's fifteenth-century invention of
movable type created a design as well as an information
Greiman's first interest was video, which led naturally to the
computer and its possibilities. She bought her first Mac as a toy,
but soon found it an indispensable creative tool. “I work
intuitively and play with technology,” says Greiman. “I like
getting immediate feedback from the computer screen, and I like to
explore alternative color and form quickly on-screen. Artwork that
exists as binary signals seems mysterious to me. It is an
exhilarating medium!” She wants to design everything and to control
and play with all kinds of sensory experience.
Designers working with digital design need to be more than
technicians. Consequently, their studies focus on perception,
aesthetics, and visual form-making as well as on technology.
I didn't have the math skills (so I thought) to become an
architect. My high school training in the arts was in the
“commercial art” realm. Later at an art school interview I was told
I was strong in graphic design. So as not to humiliate myself, not
knowing what graphic design was, I just proceeded onwards—the
“relaxed forward-bent” approach, my trademark! -April Greiman
The book remains our primary way of delivering information. Its
form has not changed for centuries, and its internal
organization-table of contents, chapter, glossaries, and so
forth-is so commonplace that we take it for granted. But now a
challenger has appeared: the computer. No longer merely a tool for
preparing art for the printer, the computer is an information
medium in itself.
Computer-based design delivers information according to the
user's particular interest. Information is restructured into webs
that allow entry from different points, a system that may be more
like our actual thinking processes than the near order of the book
is. On the computer, the designer can use time and sound in
addition to text and image to draw attention, to animate an
explanation, or to present an alternative way to understand a
concept. This new technology demands designers who can combine
analysis with intuition. Clement Mok does just that. He is a
certified Apple software developer (he can program) and a graphic
designer comfortable in most media. QuickTime system software,
recently released by Apple, supports the capability to do digital
movies on the Macintosh. As system software, it is really
invisible. “Providing users with this great technology isn't
enough,” says Mok. “You also have to give them ideas for what they
can do and samples they can use.”
Mok addressed this problem by developing QuickClips, a CD
library of three hundred film clips ranging from excerpts of
classic films to original videos and animations created by his
staff. These fifteen- to ninety-second movies can be incorporated
into user-created presentations. It is like having a small video
store in your computer. With QuickClips, Mok opened new avenues for
presentation with the computer.
It is easy to overlook type design because it is everywhere.
Typically we read for content and ignore the familiar structural
forms of our alphabet and its formal construction in a typeface.
Only when the characters are very large, or are presented to us in
an unusual way, do we pay attention to the beautiful curves and
rhythms of repetition that form our visible language.
Since Gutenberg's invention of movable type in the mid-fifteenth
century, the word has become increasingly technological in its
appearance. Early type was cast in metal, but today's new type
design is often created digitally on the computer through a
combination of visual and mathematical manipulation.
The history of culture can be told through the history of the
letterform. The lineage of many typefaces can be traced back to
Greek inscriptions, medieval scribal handwriting, or early movable
type. Lithos, which means “stone” in Greek, was designed by Carol
Twombly as a classically inspired typeface. She examined Greek
inscription before attempting to capture the spirit of these
letterforms in a type system for contemporary use.
Lithos was not an exact copy from history nor was it created
automatically on the computer. Hand sketches, settings that used
the typeface in words and sentences were developed and evaluated.
Some were judged to be too stiff, some “too funky,” but finally one
was just right. These were the early steps in the search for the
form and spirit of the typeface. Later steps included controlling
the space between letters and designing the variations in weight for
a bold font. Twombly even designed foreign-language variations.
Clearly, patience and a well-developed eye for form and system
are necessities for a type designer.
As a kid, when I wasn't climbing trees, skiing, or riding
horses, I was drawing and sculpting simple things. I wanted a
career involving art of some kind. The restrictions of
two-dimensional communication appealed to my need for structure and
my desire to have my work speak for me. The challenge of
communicating an idea or feeling within the further confines of the
Latin alphabet led me from graphic design into type design. -Carol Twombly
Most people have had the experience of losing themselves in a film
but probably haven't given much thought to the transition we go
through mentally and emotionally as we move from reality to
fantasy. Film titles help to create this transition. The attention
narrows, the “self” slips away, and the film washes over the
senses. Film titles set the dramatic stage; they tune our emotions
to the proper pitch so that we enter into the humor, mystery, or
pathos of a film with hardly a blink.
Rich Greenberg is a traditionally schooled designer who now
works entirely in film. His recent Dracula titles are a
classic teaser. He begins with the question: What is this film
about? Vampires. What signals vampires for most of us? Blood.
Greenberg believes that a direct approach using the simplest idea
is usually the best. “What I do in film is the opposite of what is
done with the print image. Dracula is a very good example of
the process. There is very little information on the screen at any
time, and you let the effect unfold slowly so the audience doesn't
know what they're looking at until the very end. In print,
everything has to be up front because you have so little time to
get attention. In film you hold back; otherwise it would be boring.
The audience is captive at a film-I can play with their minds.”
Special effects are also of interest to Greenberg. In
Predator, the designer asked, How can I create a feeling of
fear? He began by exploring the particular possibilities for horror
that depend on a monster's ability to camouflage himself so he
seems to disappear into the environment. The designer's visual
problem was to find a way for the object to be there and not be
there. It was like looking into the repeating, diminishing image in
a barber's mirror. To complicate matters, the effect needed to work
just as well when the monster was in motion.
Whether designing opening titles or special effects that will
appear throughout a film, designers have to keep their purpose in
mind. According to Greenberg, “Nobody goes to a film for the
effects; they go for the story. Effects must support the story.”
Motion graphics, such as program openings or graphic demonstrations
within a television program, require the designer to choreograph
space and time. Images, narration, movement, sound and music are
woven into a multisensory communication.
Chris Pullman at WGBH draws an analogy between creating a
magazine with its cover, table of contents, letters to the editor,
and articles, and producing a television program like Columbus and
the Age of Discovery. In both cases, the designer must find a
visual vocabulary to provide common visual features.
Columbus opens slowly and smoothly, establishing a time and
a place. A ship rocking on the waves becomes a kind of “wallpaper”
on which to show credits. The opening is a reference to what
happened—it speaks of ships, ocean, New World, Earth—without
actually telling the story.
In contrast, the computer-graphic map sequences are technical
animation and a critical part of the storytelling. Was Columbus
correct in his vision of the landmass west of Europe? Something was
there, but what and how big? Was it the Asian landmass Columbus had
promised to find? In 1520, Magellan sailed past the Americas
through the strait at the continent's southern tip—and found
5,000 more miles of sea travel to
Japan! Columbus had made a colossal miscalculation.
The designer needed to visualize this error. Authentic ancient
maps established the perspective of the past; computer animation
provided the story as we understand it today and extended the
viewer's perspective with a three-dimensional presentation. Pullman
created a 3-D database with light source and ocean detail for this
fifty-seven-second sequence. “The move was designed to follow the
retreating edge of darkness, as the sun revealed the vastness of
the Pacific Ocean and the delicate track of Magellan's expedition
snaked west. As the Pacific finally fills the whole frame, the
music, narration, and camera work conspire to create that one
goose-bump moment. In video, choreography, not composition, is the key.”
Objects, statistics, documentary photographs, labels, lighting,
text and headlines, color, space, and place—these are the materials
of exhibition design. The designer's problem is how to frame these
materials with a storyline that engages and informs an audience and
makes the story come alive. The Ellis Island Immigration Museum
provides an example of how exhibition designers solve such a problem.
The museum at Ellis Island honors the many thousands of
immigrants who passed through this processing center on their way
to becoming United States citizens. It also underscores our
diversity as a nation. The story is told from two perspectives: the
personal quest for a better life, which focuses on individuals and
families, and the mass migration itself, a story of epic proportions.
Tom Geismar wanted to evoke a strong sense of the people who
moved through the spaces of Ellis Island. In the entry to the
baggage room, he used space as a dramatic device to ignite the
viewer's curiosity. Using a coarse screen like that used in old
newspapers, Geismar enlarged old photographs to life size and then
mounted these transparent images on glass. The result is an open
space in which ghostly people from the past seem to appear.
The problem of how to dramatize statistical information was
another challenge. The exhibit Where We Came From: Sources of
Immigration uses three-dimensional bar charts to show the
number of people coming from various continents in twenty-year
intervals; the height of the vertical element signals volume.
The Peopling of America, a thematic flag of one thousand
faces, shows Americans today. The faces are mounted on two sides of
a prism; the third side of the prism is an American flag. This
striking design becomes a focal point for the visitor and is
retained as a powerful memory.
Exhibit design creates a story in space. Designers who work in
this field tend to enjoy complexity and are skilled in composition
and visual framing, model making, and the use of diagrams,
graphics, and maps.
Even as an adolescent, I was interested in “applied art.” I
was attracted to the combination of “art” (drawing, painting, etc.)
and its practical application. While there was no established
profession at the time (or certainly none that I knew of), my eyes
were opened by the Friend-Heftner book Graphic Design and my
taste more fully formed under a group of talented teachers in
graduate school. I still enjoy the challenge of problem solving.
As people become more mobile—exploring different countries, cities,
sites, and buildings—complex signage design helps them locate their
destinations and work out a travel plan. One large and multifaceted
tourist attraction that recently revamped its signage design is the
world-famous Louvre Museum, in Paris, France. In addition to the
complexity of the building and its art collection, language and
cultural differences proved to be fundamental design problems in
developing a signage system for the Louvre.
Carbone Smolan Associates was invited to compete for this
project sponsored by the French government. In his proposal, Ken
Carbone emphasized his team's credentials, their philosophy
regarding signage projects, and their conceptual approach to
working on complex projects. Carbone Smolan Associates won the
commission because they were sensitive to French culture, they were
the only competitor to ask questions, and their proposal was unique
in developing scenarios for how museum visitors would actually use
the signage system.
The seventeenth-century Louvre, with its strikingly modern
metal-and-glass entryway designed in the 1980s, presented a visual
contrast of classicism and modernity. Should the signage harmonize
with the past or emphasize the present? The design solution
combines Granjon, a seventeenth-century French typeface, with a modern sans serif.
The signage design also had to address an internal navigational
problem: how would visitors find their way through the various
buildings? To add to the potential confusion, art collections are
often moved around within the museum. The designers came up with an
innovative plan: they created “neighborhoods” within the Louvre,
neighborhoods that remained the same regardless of the collection
currently in place. The signage identified the specific
neighborhoods; the design elements of a printed guide (available in
five languages) related each neighborhood to a particular Louvre
environment. It's clear that signage designers need skills in
design systems and planning as well as in diagramming and model making.
Design simply provided the broadest range of creative
opportunities. It also appealed to my personal interest in two- and
three-dimensional work including everything from a simple poster to
a major exhibit. -Ken Carbone
Packaging performs many functions: it protects, stores, displays,
announces a product's identity, promotes, and sometimes instructs.
But today, given increased environmental concern and
waste-recycling needs, packaging has come under scrutiny. The
functions packaging has traditionally performed remain; what is
needed now is environmentally responsive design. Fitch Richardson
Smith developed just such a design—really an "un-packaging"
strategy—for the Gardena line of watering products.
A less-is-more strategy was ideally suited to capture the
loyalty of an environmentally aware consumer-a gardener. The
designers' approach was to eliminate individual product packaging
by using sturdy, corrugated, precut shipping bins as
point-of-purchase displays. Hangtags on individual products were
designed to answer the customer's questions at point-of-sale and to
be saved for use-and-care instructions at home. This approach cut
costs and reduced environmental impact in both manufacturing and
consumption. What's more, Gardena discovered that customers liked
being able to touch and hold the products before purchase.
Retailers report that this merchandising system reduces space
needs, permits tailoring of the product assortment, and minimizes
the burden on the sales staff. A modular system, it is expandable
and adaptable and can be presented freestanding or on shelves or
pegboards. The graphics are clear, bright, and logical, reinforcing
the systematic approach to merchandising and information design.
Contemporary environmental values are clearly expressed in this
packaging solution. The product connects with consumers who care
about their gardens, and the packaging-design solution relates to
their concern about the Earth.
Package designers tend to have a strong background in
three-dimensional design, design and product management, and design planning.
Environmental graphics establish a particular sense of place
through the use of two- and three-dimensional forms, graphics, and
signage. The 1984 Olympics is an interesting example of a project
requiring this kind of design treatment. The different
communication needs of the various Olympics participants—athletes,
officials, spectators, support crews, and television
viewers—together with the project's brief use, combined to create
an environmental-design problem of daunting dimension and complexity.
In the 1984 Los Angeles Games, the focus was on how a
multicultural American city could embrace an international event.
Arrangements were basic and low-budget. Events, planned to be
cost-conscious and inclusive, were integrated into Los Angeles
rather than isolated from it. Old athletic stadiums were
retrofitted rather than replaced with new ones. These ideas and
values, as well as the celebratory, international nature of the
Olympics, needed to be expressed in its environmental design.
One of the most important considerations was to design a visual
system that would provide identity and unity for individual events
that were scattered throughout an existing urban environment.
Through the use of color and light, the visual system highlighted
the geographic and climatic connection between Los Angeles and the
Mediterranean environment of the original Greek Games. The graphics
expressed celebration, while the three-dimensional physical forms
were a kind of “instant architecture”—sonotubes, scaffolding, and
existing surfaces were signed and painted with the visual system.
The clarity and exuberance of the system brought the pieces
together in a cohesive, immediately recognizable way.
Under the direction of Sussman/Prejza, the design took form in
workshops and warehouses all over the city. Logistics—the physical
scope of the design and the time required for its development and
installation—demanded that the designers exhibit not only skill
with images, symbols, signs, and model making but also considerable organizational skill.
Strategic design planners are interested in the big picture. They
help clients create innovation throughout an industry rather than
in one individually designed object or communication. First, the
strategic design planners develop a point of view about what the
client needs to do. Then they orchestrate the use of a wide variety
of design specialties. The end result integrates these specialties
into an entire vision for the client and the customer. This
approach unites business goals, such as customer satisfaction or
increasing market share, with specific design performance.
The scope of strategic design planning is illustrated by one
Doblin Group project. Customer satisfaction was the goal of the
Amoco Customer-Driven Facility. Larry Keeley, a strategic design
planner at the Doblin Group, relates that “the idea was to
reconceive the nature of the gas station. And like many design
programs, this one began with a rough sketch that suggested how gas
stations might function very differently.” The design team needed
to go beyond giving Amoco a different “look.” They needed to
consider customer behavior, the quality of the job for employees,
the kinds of fuel the car of the future might use, and thousands of
other details. Everything was to be built around the convenience
and comfort of customers.
Keeley and his team collaborated with other design and
engineering firms to analyze, prototype, and pilot-test the design.
The specific outcomes of the project include developments that are
not often associated with graphic design. For example, the project
developed new construction materials as well as station-operation
methods that are better for the environment and the customer. A gas
nozzle that integrated the display of dispensed gas with a
fume-containment system was also developed. This system was
designed to be particularly user-friendly to handicapped or elderly
customers. For Amoco itself, software-planning tools were developed
to help the company decide where to put gas stations so that they
become good neighbors. These new kinds of gas stations are now in
operation and are a success.
Creating a visual system is like designing a game. You need to ask:
What is the purpose? What are the key elements and relationships?
What are the rules? And where are the opportunities for surprise?
With over 350 national parks and millions of visitors, the United
States National Park Service (NPS) needed a publication system to
help visitors orient themselves no matter which park they were in,
to understand the geological or historical significance of the
park, and to better access its recreational opportunities. The
parts of the system had to work individually and as a whole.
Systems design involves considerations of user needs,
communication consistency, design processes, production
requirements, and economies of scale, including the standardization
of sizes. Rather than examining and designing an isolated piece,
the designer of a system considers the whole, abstracting its
requirements and essential elements to form a kind of game plan for
the creation of its parts. When Massimo Vignelli was hired to work
with the NPS design staff, they agreed on a publication system with
six elements: a limited set of formats; full-sheet presentations;
park names used as logotypes; horizontal organization for text,
maps, and images; standardized, open, asymmetric typographic
layout; and a master grid to coordinate design with printing. The
system supports simple, bold graphics like Liberty or
detailed information like Shenandoah Park, with its relief
map, text, and photographs.
A well-conceived system is not a straitjacket; it leaves room
for imaginative solutions. It releases the designer from solving
the same problem again and again and directs creative energy to the
unique aspects of a communication. To remain vital and current, the
system must anticipate problems and opportunities. Designers
working in this area need design-planning skills as well as
creativity with text, images, symbols, signs, diagrams, graphs, and maps.
Educational publishing isn't just textbooks anymore. Traditional
materials are now joined by a number of new options. Because
children and teenagers grow up with television and computers, they
are accustomed to interactive experiences. This, plus the fact that
students learn best in different ways—some by eye and some by
ear—makes educational publishing an important challenge for
Ligature believes that combining visual and verbal learning
components in a cooperative, creative environment is of paramount
importance in developing educational materials. Ligature uses
considerate instructional design, incorporating fine art,
illustrations, and diagrams, to produce educational products that
are engaging, substantive, relevant, and effective.
A Ligature project for a middle school language arts curriculum
presents twelve thematic units in multiple ways: as a full-color
magazine, a paperback anthology, an audiotape, several videotapes,
a language arts survival guide giving instruction on writing,
software, fine art transparencies, and a teacher's guide containing
suggestions for integrating these materials. These rich learning
resources encourage creativity on the part of both teachers and
students and allow a more interactive approach to learning.
Middle school students are in transition from child to adult.
The central design issue was to create materials that look youthful
but not childish, that are fresh, fun, and lively, yet look “grown
up.” The anthology has few illustrations and looks very adult,
while the magazine uses type and many lively images as design
In educational publishing, multidisciplinary creative teams use
prototype testing to explore new ideas. Materials are also
field-tested on teachers and students. Designers going into
instructional systems development need to be interested in
information, communication, planning, and teamwork.
What makes you pick up a particular magazine? What do you look at
first? What keeps you turning the pages? In general, your answers
probably involve some combinations of content (text) and design
(images, typography, and other graphic elements). Magazine
designers ask those same questions for every issue they work on;
then they try to imagine the answers of their own particular
audience—their slice of the magazine market.
At Rolling Stone, designers work in conjunction with the
art director, editors, and photo editors to add a “visual voice” to
the text. They think carefully about their audience and use a
variety of images and typefaces to keep readers interested. “We try
to pull the reader in with unique and lively opening pages and
follow through with turnpages that have a good balance of photos
and pullquotes to keep the reader interested,” says deputy art
director Gail Anderson. Designers also select typefaces that
suggest the appropriate mood for each story.
The designers work on their features from conception to
execution, consulting with editors to help determine the amount of
space that each story needs. They also work with the copy and
production departments on text changes, letterspacing, type, and
the sizing of art. At the beginning of the two-week cycle,
designers start with printouts of feature stories. They select
photographs and design a headline. Over the course of the next two
to three days, they design the layouts. At the same time, each
Rolling Stone designer is responsible for one or more of the
magazine's departments and lays out those pages as well. Eventually
both editors and designers sign off on various stages of the
production process and examine final proofs.
Anderson is excited about how the new technology has changed the
role of magazine designers. “We now have the freedom to set and
design type ourselves, to experiment with color and see the results
instantly, and to work in what feels like 3-D. The designer's role
has certainly expanded, and I think it is taken more seriously than
it was even a few years ago.” Magazine designers should enjoy
working with both type and images, be attuned to content concerns
and able to work well with editors, have technological expertise,
and be able to tolerate tight deadlines.
Drawing—deciding what is significant detail, what can be suggested,
and what needs dramatic development—is a skill that all designers
need in order to develop their own ideas and share them with
others. Many designers use drawing as the core of their work.
Milton Glaser is such a designer.
Keeping a creative edge and searching for new opportunities for
visual development are important aspects of a lively design
practice. When Glaser felt an urge to expand his drawing vocabulary
and to do more personally satisfying work, he found himself
attracted to the impressionist artist Claude Monet. Glaser liked
the way Monet looked: his physical characteristics expressed
something familiar and yet mysterious. Additionally, Monet's visual
vocabulary was foreign to Glaser whose work is more linear and
graphic. While many designers would be intimidated by Monet's
stature in the art world, Glaser was not because he was consciously
seeking an opportunity for visual growth. In a sense, Glaser's
drawings of Monet were a lark—an invention done lightly.
Glaser worked directly from nature, from photographs, and from
memory in order to open himself to new possibilities. The drawings,
forty-eight in all, were done over a year and a half and then were
shown in a gallery in Milan. They became the catalog for a local
printer who wanted to demonstrate his color fidelity and excellence.
The drawings show flexibility of vision: the selection of detail,
the balancing of light and shadow, and the varying treatments of
figure and ground.
Drawing is a rich and immediate way to represent the world, but
drawing can also illustrate ideas in partnership with design.
Creating the key graphic element that identifies a product or
service and separates it from its competitors is a challenging
design problem. The identity needs to be clear and memorable. It
should be adaptable to extreme changes in scale, from a matchbox to
a large illuminated sign. And it must embody the character and
quality of what it identifies. This capturing of an intangible is
an important feature of identity design, but it is also a subtle one.
Hotel Hankyu International is the flagship hotel for the Hankyu
Corporation, a huge, diversified Japanese company. It is relatively
small for a luxury hotel, with only six floors of accommodations.
The client wanted to establish the hotel as an international hotel,
rather than a Japanese hotel. In Japan, "international" means
European or American. Consequently, the client did not look to
Japanese designers but hired Pentagram—with the understanding
that the hotel's emblem would be a flower, since flowers are
universally associated with pleasure.
The identity was commissioned first, before other visual
decisions (such as those about the interior architecture) were
made. Here the graphic designer could set the visual agenda. Rather
than one flower, six flowers were designed as the identity, one for
each floor. To differentiate itself in its market, this small
luxury hotel benefited from an extravagant design. Each flower is
made up of four lines that emerge from the base of a square. The
flowers are reminiscent of the 1920s Art Deco period, which suggests
sophistication and world travel. Color and related typefaces link
the flowers. One typeface is a custom-designed, slim Roman alphabet
with proportions similar to those of the flowers. The other
consists of Japanese characters and was designed by a Japanese designer.
The identity appears on signage, room folders, stationery,
packaging, and other hotel amenities. It is clear and memorable and
conveys a sense of luxury. Designers working with identity design
need to be skilled manipulators of visual abstraction, letterforms,
and design systems.
Systems design seeks to unify and coordinate all aspects of a
complex communication. It strives to achieve consistent verbal and
visual treatment and to reduce production time and cost. Systems
design requires a careful problem-solving approach to handling complexity.
Caterpillar Inc. is a worldwide heavy-equipment and engine
manufacturer. Its most visible and highly used document is the
Specalog, a product-information book containing
specifications, sales and marketing information, and a
competitive-product reference list. A Specalog is produced
for each of fifty different product types and translated into
twenty-six languages. The catalog output totals seventy million pages
annually. Before Siegel & Gale took on Specalog, no
formal guidelines existed, so the pages took too much time to
create and were inconsistent with Caterpillar's literature strategy
and corporate image.
Bringing systematic order and clarity to this mountain of
information was Siegel & Gale's task. First they asked
questions: What do customers and dealers need to know? What do the
information producers (Caterpillar's product units) want? An
analysis of existing Specalogs revealed problems with both
verbal and visual language: there was no clear organization for
content; language was generic; product images were taken from too
great a distance; and specifications charts lacked typographic
clarity. The brochures of Caterpillar's competitors were also
analyzed so as not to miss opportunities to make Specalog
distinctive. These activities resulted in a clear set of design guidelines.
A working prototype was tested with customers and dealers.
Following revisions, the new design was implemented worldwide. Its
significant features include an easy-to-use template system
compatible with existing Macintosh computers (thus allowing for
local-market customization), a thirty-percent saving in production
time and cost, and increased approval by both customers and
dealers. Achieving standardization while encouraging customization
is a strategy in many large international organizations. Designers
involved with projects like this study information design, design
planning, and evaluation techniques.
Designers are problem solvers who create solutions regardless
of the medium. But, designers create within the confines of
reality. The challenge is to push the limits of reality to achieve
the most effective solution. -Lorena Cummings
Whether they are large or small, corporations need to remind their
public who they are, what they are doing, and how well they are
doing it. Even the venerable Wall Street banking firm of J.P.
Morgan needs to assert itself so the public remembers its existence
and service. Corporate communications serve this function, and the
design of these messages goes a long way toward establishing
Usually corporate communications include identity programs and
annual reports, but there are also other opportunities to
communicate the corporate message. Since 1918, J.P. Morgan has
published a unique guide that keeps up with the changing world of
commerce and travel. The World Holiday and Time Guide covers
over two hundred countries, and keeps the traveler current with
twenty-four time zones. In the Guide, the international
businessperson can find easy-to-read tables and charts giving the
banking hours as well as opening and closing business times for
weekdays and holidays. Specific cultural holidays, such as Human
Rights Day (December 10) in Namibia and National Tree Planting Day
(March 23) in Lesotho are included. The seventy-five-year history
of the Guide is also an informal chronicle of world change.
It has described the rise and decline of Communism and the
liberation of colonial Africa and Asia; today it keeps up with the
recent territorial changes in Europe. The covers of the Guide
invite the user to celebrate travel and cultural diversity; the
interior format is a model of clarity and convenience.
In-house design groups have two functions: they provide a design
service for their company and they maintain the corporate image.
Because projects are often annual, responsibility for them moves
around the design group, helping to sustain creativity and to
generate a fresh approach to communication. Consequently, the
Guide is the work of several designers. To work in corporate
communications, designers need skills relating to typography,
information design, and print design.
My early exposure to a design studio made me aware of the
design profession as an opportunity to apply analytical abilities
to an interest in the fine arts. Graduate design programs made it
possible for me to delve more deeply into the aspects of design I
found personally interesting. Since then, the nature of the design
profession, which constantly draws the designer into a wide range
of subjects and problems, has continued to interest me in each new
project. It's been this opportunity to satisfy personal interests
while earning a living that has made design my long-term career
choice. -Won Chung
Just as profit-oriented corporations need to present a carefully
defined visual identity to their public, so must a nonprofit
organization like the Walker Art Center. Even with limited
resources, this museum uses graphic designers to present its best
face to the public.
For twenty years the Walker Art Center presented itself in a
quiet, restrained, and neutral manner. It was a model of
contemporary corporate graphics. But times change, and like many
American museums, the Walker is now taking another look at its role
in society. The questions the Walker is considering include: What
kind of museum is this? Who is its audience? How does the museum
tell its story to its audience? What should its visual identity and
publications look like? Identity builds expectation. Does the
identity established by the museum's communications really support
the programs the Walker offers?
The stock-in-trade of the Walker Art Center includes exhibitions
and the performing arts for audiences ranging from children to
scholars, educational programs, and avant-garde programming in film
and video. As the museum's programming becomes even more varied,
the old “corporate” identity represented by a clean, utilitarian
design no longer seems appropriate. To better represent the
expanded range of art and audience at the museum, The Design
Studio, an internal laboratory for design experimentation at the
Walker, is purposely blurring aspects of high and low culture and
using more experimental typefaces and more eclectic communication
approaches. Posters, catalogs, invitations to exhibitions, and
mailers for film and performing art programs often have independent
design and typographic approaches, while the calendar and members'
magazine provide a continuity of design.
Publication design, symbol and identity systems, and type and
image relationships are among the areas of expertise necessary for
in-house museum designers.
I like the way words look, the way ideas can become things. I
like the social, activist, practical, and aesthetic aspects of
design. -Laurie Haycock Makela
How do you get around in an unfamiliar city? What if the language
is completely different from English? What kind of guidebook can
help you bridge the communication gap? Access Tokyo is a
successful travel guide to one of the most complex cities in the
world. It is also an example of information design, the goal of
which is clarity and usefulness.
Richard Wurman began the Tokyo project as an innocent, without
previous experience in that city. His challenge was to see if he
could understand enough about Tokyo to make major decisions about
what to include in a guidebook. He also needed to develop useful
instruction to help the English-speaking tourist get around.
Ignorance (lack of information) and intelligence (knowing how to
find that information) led him to ask the questions that brought
insight and order to his project. Using his skill in information
and book design, the designer used his own experience as a visitor
to translate the experience of Tokyo for others.
Access Tokyo presents the historical, geographical, and
cultural qualities that make Tokyo unique, as well as resources and
locations for the outsider. Maps are a particular challenge since
they require reducing information to its essential structure. The
map for the Yamanote Line, a subway that rings Tokyo, is clear and
memorable. The guide is bilingual because of the language gap
between English with its Roman alphabet and Japanese with its
ideographic signs. The traveler can read facts of interest in
English but can also show the Japanese translation to a cab driver.
Wurman also wanted to get the cultural viewpoint across. To this
end, he asked Japanese architects, painters, and designers to
contribute graphics to the project. The colorful tangram (a puzzle
made by cutting a square of paper into five triangles, a square,
and a rhomboid) is abstract in a very Japanese way. Access
Tokyo bridges the culture chasm as well as the information gap.
“One who organizes, manages, and assumes the risks of a business
enterprise.” This dictionary definition of the word
entrepreneur is a bland description of a very interesting
possibility in design. A design entrepreneur extends the general
definition: he or she must have a particular vision of an object
and its market. While many designers believe they could be their
own best client, few act on this notion. Tibor Kalman of M&Co.
acted: he was a design entrepreneur.
Kalman's firm, M&Co, was not without clients in the usual
sense. Their innovative graphics for the Talking Heads music video
“(Nothing but) Flowers” demonstrates that creativity and even fun
are possible in traditional design work for clients. But somehow
this wasn't enough for Kalman. He was frustrated with doing the
packaging, advertising, and promotion for things he often viewed
critically. He wanted to do the "real thing," the object itself.
Kalman started with a traditional object, a wristwatch. He then
applied his own particular sense of humor and elegant restraint to
the “ordinary” watch in order to examine formal ideas about time.
The Pie watch gives only a segment, or slice, of time, while
the Ten One 4 wristwatch is such a masterpiece of
understatement that it is in the permanent design collection of the
Museum of Modern Art. Other variations include Romeo (with
Roman numerals rather than Arabic ones), Straphanger (with
the face rotated ninety degrees to accommodate easy reading on the
subway), and Bug (with bugs substituted for the usual
numerals). These few examples give a sense of the wry humor that
transforms an ordinary object into a unique personal pleasure.
Entrepreneurial design requires creativity and business savvy
along with design and project-management skills. Of course, an
innovative concept is also a necessity. Vision and risk-taking are
important attributes for the design entrepreneur.
I became a designer by accident; it was less boring than
working in a store. I do have some regrets, however, as I would
prefer to be in control of content rather than form. -Tibor Kalman
If you are a take-charge person with vision, creativity, and
communication and organizational skills, becoming a design
executive might be a good long-term career goal. Obviously, no one
starts out with this job; it takes years to grow into it. A brief
review of Robert Blaich's career can illustrate what being a design
executive is all about.
Educated as an architect, Blaich became involved with marketing
when he joined Herman Miller, a major American furniture maker.
Then he assumed a product-planning role and began to consciously
build design talent for the organization. By the time he was vice
president of design and communication, Blaich was running Herman
Miller's entire design program (including communication, product,
and architectural design). In a sense, he was their total design executive.
In 1980 Blaich came to Philips Electronics N.V., an
international manufacturer of entertainment and information
systems. Located in the Netherlands, Philips is the world's
twenty-eighth largest corporation and was seen by many Americans as
a stodgy foreign giant. The president of Philips asked Blaich to
take the corporation in new directions. By the time Blaich left in
1992, design was a strategic part of Philips's operation and its
dull image was reinvigorated and unified. What's more, the
corporation now saw its key functions as research, design,
manufacturing, marketing, and human resources, in that order.
Design's number-two position reflected a new understanding of its
importance. Today, as president of Blaich Associates, Blaich is a
consultant for Philips and responsible for corporate identity and
for strategic notions of design.
Just what do corporate design executives do? They look at design
from a business point of view, critique work, support new ideas,
foster creativity and collaboration, bring in new talent, and
develop new design capabilities. They are design activists in a corporate setting.
The best teaching is about learning, exploring, and making
connections. Teachers in professional programs are almost never
exclusively educators; they also practice design. Sheila Levrant de
Bretteville is a case in point. She is a professor of graphic
design at Yale University and owner of The Sheila Studio. Both her
teaching and design are geared toward hopeful and inspiring
Looking at a student assignment and at one of de Bretteville's
own design projects illuminates the interplay of teaching and
practice. De Bretteville saw the windows of abandoned stores in New
Haven as an opportunity to communicate across class and color
lines. She chose the theme of “grandparents,” which formed a
connection between her Yale students and people of the community.
The windows became large posters that told stories of grandparents
as immigrants, as labor leaders, as the very aged, and more. The
project gave students the opportunity to explore the requirements
of space, materials, and information.
De Bretteville's project, Biddy Mason: Time and Place, is
an example of environmental design. Located in Los Angeles, it
explores the nine decades of Biddy Mason's life: as a slave prior
to her arrival in California and as a free woman in Los Angeles
where she later lived and worked and founded the AME church. “I
wanted to celebrate this woman's perseverance and generosity,” says
de Bretteville. “Now everyone who comes to this place will know
about her and the city that benefited from her presence here.” A
designed tactile environment, which included the imperfections in
the slate and concrete wall, required working with processes and
materials that were often unpredictable, like the struggles Biddy
faced in her life.
Biddy Mason and the grandparent windows connect design
practice and teaching as de Bretteville encourages her students to
use their knowledge, skills, and passion to connect to the
community through design.
Graphic Design: A Career Guide and Education Directory
Edited by Sharon Helmer Poggenpohl
The American Institute of Graphic Arts
The aim is to create a complete
philosophy of mathematics based directly on applied mathematics, taking the
view that mathematics is not about other-worldly entities like numbers or sets,
nor a mere language of science, but a direct science of structural features of
the real world like symmetry, continuity and ratios.
Applied mathematicians take it for
granted they are studying certain real features of the world - properties like
symmetry and continuity. Modern developments in mathematics such as chaos
theory and computer simulation have confirmed that view, but traditional
philosophy of mathematics has remained fixated instead on complicated formal
results concerning the simplest mathematical entities, numbers and sets. Using
straightforward examples that exhibit the richness of the mathematical study of
complexity, the grant project will develop an Aristotelian realist philosophy
of mathematics that challenges the usual Platonist and other classical options.
In an argument readable by an educated philosophical or scientific
audience, it shows how mathematics finds the necessities hidden below
the surface of our world.
For most of the twentieth century, the
philosophy of mathematics was dominated by the competing schools of logicism,
formalism and intuitionism, all of which emphasised the role of human thought
and symbols in creating mathematics. Dating from around 1900, they were
generally regarded as unsatisfactory, especially in explaining applied
mathematics. (Körner 1962) For example logicism, the theory developed by Frege
and Russell that mathematics is just logic, proved untenable on technical
grounds as well as giving no insight into how trivial logical truths could
prove so useful in dealing with the real world.
Those schools shared this problem with
Platonism, the traditional alternative according to which mathematics is about
an abstract or other-worldly realm inhabited by numbers, sets and so on;
Platonism always found it hard to explain the mysterious connection between
that other world and the real objects of our world which are counted and
weighed. Platonism also has significant epistemological problems, being
susceptible to Benacerraf's challenge (1973).
The challenge is to explain how knowledge of mathematics is possible,
given (i) a broadly causal approach to epistemology, and (ii) the view that
mathematical objects are abstract.
Despite this difficulty, many working mathematicians continue to find
Platonism attractive, in part because it seems to be the only realist position
By the time of Eugene Wigner's celebrated
1960 article `The unreasonable effectiveness of mathematics in the natural
sciences', it was clear that new directions in the philosophy of mathematics
were needed. In the last thirty years, there has been a diverse range of
responses to the impasse, but there has been no agreement on what is the
leading direction, or even consensus within particular schools on whether the
problem of the applicability of mathematics is adequately solved. Much of the
best work has been in a Platonist direction. Works such as Colyvan 2001a and
2001b have showed that Platonism has substantial resources and is not easily
dismissed, while Steiner 1998 presented a direct Platonist attack on the
problem of the applicability of mathematics. Nevertheless we believe (for
reasons to be developed more fully in our project) that these authors have not
succeeded in dealing with the argument advanced originally by Aristotle, that
sciences of the real world should be able to deal with real properties
directly, and reference to abstract objects in another world creates
philosophical difficulties without being necessary for explaining the necessary
interconnections between the real properties. In particular we believe an adequate
epistemology for realism has yet to be developed. For this reason we also
disagree with the school led by Resnik (1997) and Shapiro (1997, 2004)
(surveyed in Reck and Price 2000 and Parsons 2004). Although like us they accept
the slogan "mathematics is the science of structure" and they have made many
perceptive observations on the way mathematics looks at structure and patterns,
their theory is in our view vitiated as a complete philosophy of mathematics by
their tendency to regard "structures" as a kind of Platonist entity similar to
numbers and sets.
There have also been nominalist philosophies of mathematics (Field 1980,
Azzouni 1994, Chihara 2004), which we believe are subject to the insurmountable
obstacles that dog nominalism in general. As with the Platonists, they speak as
if Platonism and nominalism are the only alternatives, whereas Aristotelian
realists believe those two schools make the same error, of supposing that
everything that exists is an individual (whether physical or abstract). The nominalists
did however usefully describe some possibilities of discussing mathematical
realities without reference to Platonist abstract entities.
One of the more important developments in
philosophy of mathematics in the last quarter of the twentieth century is the
rise of indispensability arguments for mathematical realism.
According to the Quine-Putnam
indispensability arguments, we must believe in the existence of mathematical
objects if we accept our best physical theories at face value.
Our best physical theories make
indispensable reference to mathematical objects. We agree that indispensability
arguments are important but believe their significance has been misunderstood
because of the Platonism-or-nominalism dichotomy being assumed. That encourages
a fundamentalist attitude to mathematical language, as if numbers must either
exist fully as abstract entities, or not exist in any way at all. Some subtlety
is needed as to what exactly is concluded to be indispensable. (Baker 2003).
Moreover, care must be taken so as to make room in naturalism for the
distinctive methods employed in generating mathematical knowledge (Maddy 1992).
Instead, we will argue, mathematical language is
indeed about some real aspects of the world, but not about abstract objects.
Mathematics does not stand to natural science as a tool stands to a constructed
entity; rather the object of scientific study exemplifies, or instantiates, a
mathematical structure. (What to say of mathematical structures that have no
physical instantiation is an issue that we will also consider carefully.) Thus
a (pure) quantum state is a vector, and a space-time is a
differentiable manifold, and both facts constrain the object in very definite,
mathematically understood, ways.
We will be guided by more hopeful
developments from a number of Australian authors (Armstrong 1988, 1991, Forrest
and Armstrong 1987, Bigelow 1988, Bigelow and Pargetter 1990, Michell 1994, Mortensen
1998), supported by a few overseas writings that are not explicitly in the
philosophy of mathematics (Dennett 1991, Devlin 1994, Mundy 1987) They hark
back to the old theory of medieval and early modern Aristotelians that
mathematics is the "science of quantity", one still visible in some basic
developments of nineteenth-century mathematics (Newstead 2001) but thereafter
ignored. This work is situated in the Australian realist theory of universals
defended by D.M. Armstrong. Lengths, weights, time intervals and so on are real
properties of things, and so are the relations between those properties. So a
ratio such as 2.71, for example, is conceived to be the (real) relation that
can be shared by pairs of lengths, pairs of weights and pairs of time intervals.
A similar analysis is given of whole numbers like 4, which is a real relation
between a heap of, say, parrots, and the "unit-making" property,
being-a-parrot. This school of thought has unfortunately been little noticed
outside Australia, a situation we hope to remedy. It has also confined itself
to analysing only the most simple and traditional mathematical objects, such
as numbers and sets, thus ignoring the richer mathematical structures like
symmetry and network topology, and the more applied mathematical sciences such
as operations research, where, we believe, the strengths of a structuralist
philosophy of mathematics are both more obvious and better connected with the
concerns of working scientists.
Those concerns have broadened in ways
that demand to be considered philosophically. The last sixty years have seen the creation of a number of new "formal"
or "mathematical" sciences, or "sciences of complexity" - operations research,
theoretical computer science, information theory, descriptive statistics, mathematical
ecology, control theory and others. Theorists of science have almost ignored
them, despite the remarkable fact that (from the way the practitioners speak)
they seem to have come upon the "philosophers' stone": a way of converting
knowledge about the real world into certainty, merely by thinking. (Franklin
1994) In these sciences and more generally in the natural sciences, there has
been a better appreciation of the role of "systems concepts" like "ecosystem",
"water cycle", "energy balance", "feedback" and "equilibrium". They
provide the language for studying complex interactions. They are generalisable
to other complex systems, such as those in business, and so show the relevance
of scientific systems thinking to the wider world. They unify and give a
perspective on science itself, and on its connections with the science of
complexity, mathematics. (Franklin 2000) The present project will give the
first extended philosophical consideration to the full range of this body of knowledge.
The part of the project most undeveloped
so far is its epistemology. Once it is established that mathematics deals with
structural aspects of the world, how are those aspects known? Where Platonism
has immense difficulties in explaining how we could know about entities such as
number which it takes to be in "another world", Aristotelian approaches give
promise of a more direct epistemology, since one can sense symmetry (for
example) as well as one can sense colour. Realising that promise is difficult, however,
since one needs to integrate an
Aristotelian theory of abstraction (the cognition of one feature of reality,
say colour, in abstraction from others, such as shape) with what is known from
cognitive psychology on pattern recognition and the comparison of modalities
(for example, how the brain compares felt and seen shape). The well-known role
of proof in establishing mathematical knowledge needs to be integrated as well.
Again, there is little work at present on that topic.
After recalling the
general reasons for accepting an Aristotelian realist position on universals
(these reasons are developed by other writers, but still need collecting and
expounding in a way relevant to the mathematical case), and illustrating them
in the examples just mentioned, we will be in a position to develop the core of
the theory that mathematics is a science of certain real properties. One task
is to distinguish two substantially different kinds of properties that are both
objects of mathematics. An older theory held that mathematics is the "science
of quantity", a newer one that it studies structure or patterns. Both quantity
and structure are real features of the world, but different ones. Both are studied
by mathematics. The division between the two roughly corresponds to the
division between elementary and higher mathematics.
The first component of the project will consist in
an investigation of the indispensability argument and its relation to quantum mechanics.
For while quantum mechanics presents an argument for realism about the complex
number field, it also suggests that this field has primacy. And since this
field subsumes the natural numbers and the reals, it suggests a significant
limitation to the science of quantity conception since that is inextricably
linked with linearly orderable fields. We believe this represents an area of
hitherto untapped connections and arguments that is capable of throwing great
light on the relation between physics and mathematics. Thus one thing we will
be concerned with is the significance of the Montgomery-Odlyzko law. This
suggests that the eigenvalues of a random Hermitian matrix (such as might be
found in certain quantum mechanical problems) have the same spacing properties
as the non-trivial zeroes of the Riemann zeta function, which are not spaced
randomly. This is now fairly widely confirmed, but it suggests a connection
between very different areas of science: between the traditional a priori and
traditional a posteriori. (What role quantum mechanics is itself playing in
this connection is still an unsolved question. Professor Barry Mazur of Harvard
has made some interesting comments to us on this problem.)
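The spacing claim can be illustrated numerically. The sketch below (our illustration, not part of the proposal) samples eigenvalue gaps of random 2x2 Hermitian (GUE) matrices and exhibits the characteristic "level repulsion": gaps near zero are far rarer than gaps near the mean spacing, the same qualitative behaviour observed for the normalised spacings of the zeta zeros.

```python
import math
import random

def gue2_spacing(rng):
    """Eigenvalue gap of a random 2x2 Hermitian (GUE) matrix.

    Diagonal entries are real N(0, 1); the off-diagonal entry is complex
    with independent N(0, 1/2) real and imaginary parts. For the matrix
    [[a, b], [conj(b), d]] the eigenvalue gap is
    2 * sqrt(((a - d)/2)**2 + |b|**2).
    """
    a = rng.gauss(0.0, 1.0)
    d = rng.gauss(0.0, 1.0)
    br = rng.gauss(0.0, math.sqrt(0.5))
    bi = rng.gauss(0.0, math.sqrt(0.5))
    return 2.0 * math.sqrt(((a - d) / 2.0) ** 2 + br * br + bi * bi)

def spacing_fractions(n=20000, seed=0):
    rng = random.Random(seed)
    s = [gue2_spacing(rng) for _ in range(n)]
    mean = sum(s) / len(s)
    s = [x / mean for x in s]          # normalise to unit mean spacing
    near_zero = sum(1 for x in s if x < 0.1) / n
    near_mean = sum(1 for x in s if 0.9 < x < 1.1) / n
    return near_zero, near_mean

near_zero, near_mean = spacing_fractions()
# Level repulsion: the spacing density vanishes like s^2 near s = 0,
# so tiny gaps are far rarer than gaps near the mean.
print(near_zero < near_mean)
```

For independent (Poisson) eigenvalues the smallest gaps would be the most common; the reversal seen here is the signature behaviour that the Montgomery-Odlyzko law attributes to the zeta zeros as well.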
We will develop arguments that the Aristotelian
realist view has the greatest chance of explaining this connection; just as it
has the best chance of explaining what we call inverse indispensability in
general. On this argument there is also an "unreasonable dependence" of
mathematics on physics. The discovery of the infinite number of exotic
differential structures on four dimensional manifolds (making four dimensional
manifolds unique in differential geometry) offers a very striking example of
this phenomenon - since the exotic structures arose out of mathematical
physics. This inverse indispensability can only really be explained, we argue,
on the Aristotelian view.
After establishing our
metaphysical case, arguing for our view that mathematics studies structural
aspects of the real world, we will move to epistemological issues. Theory on
how mathematics can be known is an underdeveloped part of structural
philosophies of mathematics, and is well recognised as a major difficulty for
realist philosophies of mathematics in general. In this second component of the
project, we will show that Benacerraf's challenge can be overcome by our brand
of realism. The fundamental dilemma for realists was identified (by Benacerraf
1973) as the problem of providing a naturalistic (or broadly causal)
epistemology for mathematics, if mathematics indeed refers to something real.
How can those objects affect us, so that we can know about them? That is very
difficult to explain on a Platonist view, since Platonic objects do not have
causal power. Aristotelian views such as ours permit us to develop a much more
plausible and direct answer, since structural features of real things, such as
symmetry, can affect us in the same way as, for example, their colour, and so
can be directly perceived. On our Aristotelian view, the objects of mathematics
do not exist outside of space and time, but are immanent in space and
time. Consequently, we hold that some
simple mathematical ideas are indeed acquired in a causal manner.
It is true that some of
the more complicated entities spoken of in mathematics, such as the Hilbert
spaces of quantum mechanics, do not seem to be directly perceivable. In order
to move from simple perception of patterns to sophisticated mathematical
theorising, it is necessary to form abstract ideas of structures and
quantities. Therefore, we (and especially the research assistant employed by
the grant) will pay special attention to the role of abstraction in generating
mathematical knowledge. Aristotelians hold that mathematicians abstract or
"separate in thought" features of objects that they perceive in the real world.
We will survey various interpretations of abstraction, and present a theory on
which abstraction draws attention to mathematical features of existing physical
objects (but does not bring into existence any kind of Platonist "abstract
objects"). We anticipate the objection
that the natural world does not have the perfect precise structures needed in
mathematics. We will therefore consider
the role of idealisation in abstraction, and compare it to the uses of
idealisation in physics (e.g. massless points and frictionless planes). In neither case should idealisation
undermine the reality of the phenomena studied.
We will rebut various
objections that have been raised against the meaningfulness or possibility of
abstraction, notably by Frege (1884/1950). Frege's objections are an important
reason for the neglect of an Aristotelian approach. However, we will
demonstrate that Frege's criticisms do not touch Aristotelian realism. In particular, objections having to do with
how the individuality of mathematical objects is preserved if they are obtained
by abstraction do not apply to our theory.
Since mathematical objects are universals, they are not individual particulars
and not subject to this objection.
The several components
of the project cohere very well, since a proper understanding of the
indispensability of mathematics and physics to one another yields rich results
in metaphysics and epistemology.
Finally, the theoretical work of the project is complemented throughout
by the extensive knowledge of a working mathematician.
The main lines along
which our argument should proceed are clear, but there is much detailed work to
be done to consider and reinterpret existing material, and to ensure coherence
between the various parts of the project: metaphysical, epistemological,
mathematical and quantum-mechanical. We
anticipate finding that the understanding of the relation of mathematics and
physics produced by consideration of the indispensability argument in the first
part of the project will shape our epistemology in the second part of the
project. Throughout, our findings will
be grounded by the examples of a working mathematician.
In the light of this
plan, we would anticipate the three years of the work on the grant being
structured as follows:
Year 1: CI Franklin to complete current writing on "quantity" as an object of
mathematics, CI Heathcote to research and write on issues relating to quantum
mechanics, both CIs to work with research assistant on initial research on
epistemological issues of abstraction, pattern recognition and proof.
Year 2: Research assistant to work intensively on epistemology, with input from CIs;
research assistant or CI Heathcote to visit Cambridge and St Andrews for
conferences; submission of several academic papers to journals; planning of
book and negotiation with possible publishers.
Year 3: Completion and submission of book containing the full work, probably to
Oxford University Press.
Armstrong, D.M., 1988,
`Are quantities relations? A reply to Bigelow and Pargetter', Philosophical
Studies 54, 305-16.
Armstrong, D.M., 1991,
`Classes are states of affairs', Mind 100, 189-200.
Azzouni, J., 1994, Metaphysical
Myths, Mathematical Practice, Cambridge, Cambridge University Press.
Baker, A., 2003, `The indispensability argument and multiple foundations of
mathematics', Philosophical Quarterly 53, 49-67.
Benacerraf, P., 1965, `What numbers could not be', Philosophical Review 74,
495-512.
Benacerraf, P., 1973, `Mathematical truth', Journal of Philosophy 70, 661-79.
Bigelow, J., 1988, The Reality of Numbers: A Physicalist's Philosophy of
Mathematics, Clarendon, Oxford.
Bigelow, J. and R. Pargetter, 1990, Science and Necessity, Cambridge
University Press, Cambridge.
Chihara, C.S., 2004, A
Structural Account of Mathematics, Clarendon, Oxford.
Colyvan, M., 2001a, The
Indispensability of Mathematics, Oxford University Press, New York.
Colyvan, M., 2001b, `The miracle of applied mathematics', Synthese 127,
265-77.
Dennett, D., 1991, `Real patterns', Journal of Philosophy 88, 27-51.
Devlin, K.J., 1994, Mathematics:
The Science of Patterns, Scientific American Library, New York.
Field, H., 1980, Science Without Numbers: A Defence of Nominalism, Princeton
University Press, Princeton.
Fine, K., 2001, Limits of Abstraction, Oxford University Press, Oxford.
Forrest, P. and D.M. Armstrong, 1987, `The nature of number', Philosophical
Papers 16.
Frege, G., 1884/1950. Foundations
of Arithmetic. Blackwell, Oxford.
Franklin, J., 1989, `Mathematical necessity and reality', Australasian J. of
Philosophy 67, 286-294.
Franklin, J., 2000, `Diagrammatic reasoning and modelling in the imagination,
the secret weapons of the Scientific Revolution', in 1543 and All That, Image
and Word, Change and Continuity in the Proto-Scientific Revolution, ed. G.
Freeland & A. Corones (Kluwer, Dordrecht), pp. 53-115.
Franklin, J., 1994, `The formal sciences discover the philosophers' stone',
Studies in History and Philosophy of Science 25.
Franklin, J., 2000,
`Complexity theory, mathematics and the unity of science', History,
Philosophy and New South Wales Science Teaching Third Annual Conference, ed. M. Matthews, pp. 91-4.
Franklin, J., 2003, Corrupting
the Youth: A History of Philosophy in Australia, Macleay Press, Sydney.
Hale, B., 1996, `Structuralism's unpaid epistemological debts', Philosophia
Mathematica.
Heathcote, A., 1990, `Unbounded operators and the incompleteness of quantum
mechanics', Philosophy of Science S90, 523-34.
Körner, S., 1962, The
Philosophy of Mathematics: An Introduction, Harper, New York.
Mac Lane, S., 1986, Mathematics:
Form and Function, Springer, New York.
Maddy, P., 1990. Realism
in Mathematics, Clarendon Press, Oxford.
Maddy, P., 1992, `Indispensability and mathematical practice', Journal of
Philosophy 89.
Maddy, P., 1997. Naturalism
in Mathematics, Clarendon Press, Oxford.
Michell, J., 1994, `Numbers as quantitative relations and the traditional
theory of measurement', British Journal for the Philosophy of Science 45,
389-406.
Mortensen, C., 1998, `On the possibility of science without numbers',
Australasian J. of Philosophy 76.
Mundy, B., 1987, `The metaphysics of quantity', Philosophical Studies 51,
29-54.
Newstead, A.G.J., 2001, `Aristotle and modern mathematical theories of the
continuum', in D. Sfendoni-Mentzou, ed., Aristotle and Contemporary Science,
vol. 2, Lang.
Parsons, C., 2004, `Structuralism and metaphysics', Philosophical Quarterly
54.
Quine, W.V., 1951/1980, `Two dogmas of empiricism', in From a Logical Point of
View, Harvard University Press, Cambridge, MA.
Reck, E., and M. Price, 2000, `Structures and structuralism in contemporary
philosophy of mathematics', Synthese 125, 341-383.
Resnik, M.D., 1997, Mathematics
as a Science of Patterns, Clarendon, Oxford.
Shapiro, S., 1997, Philosophy
of Mathematics: Structure and Ontology, Oxford University Press, New York.
Shapiro, S., 2004, `Foundations of mathematics: metaphysics, epistemology,
structure', Philosophical Quarterly 54, 16-37.
Steiner, M., 1975. Mathematical
Knowledge, Cornell University Press, Ithaca.
Steiner, M., 1998, The Applicability of Mathematics as a Philosophical
Problem, Harvard University Press, Cambridge, MA.
Weyl, H., 1952, Symmetry,
Princeton University Press, Princeton.
Wigner, E., 1960, `The unreasonable effectiveness of mathematics in the
natural sciences', Communications on Pure and Applied Mathematics 13, 1-14.
Understanding Herbicide Resistance
of an Enzyme in the “Pigments of Life”
At the ARS Natural Products Utilization Research Unit in Oxford, Mississippi, support scientist Susan Watson extracts a sample of pigments from leaf tissue for high-performance liquid chromatography analysis by plant physiologist Franck Dayan.
An Agricultural Research Service scientist in Oxford, Mississippi, is working toward developing new herbicides by focusing on a molecular pathway that not only controls weeds in soybean fields, but might also have helped shape our nation’s history.
Franck Dayan, a plant physiologist with the ARS Natural Products Utilization Research Unit in Oxford, is an expert on a class of weed killers known as “PPO herbicides,” which choke off the weed’s ability to make chlorophyll. His efforts are increasingly important because weeds are beginning to develop resistance to glyphosate, the world’s most widely used herbicide, and alternatives are needed.
Much of Dayan’s work focuses on a class of ring-shaped pigment molecules known as porphyrins (pronounced POR-fer-ins) that “bind” or react with different metals and perform vital functions in both plants and animals. Chlorophyll is a porphyrin that binds magnesium, giving plants their green pigment and playing a pivotal role in photosynthesis. Heme is a porphyrin that binds iron as an essential step in supplying oxygen to animal blood cells.
One of the key steps in porphyrin synthesis is performed by an enzyme (protoporphyrinogen oxidase, or PPO), and its disruption can cause problems in plants and animals. In humans, disruption of the PPO enzyme is associated with a congenital disease known as “porphyria,” with symptoms that may include light sensitivity, seizures, and neuropsychiatric problems. Scholars have argued that a case of porphyria in King George III may have contributed to the colonies’ struggle for independence. (See sidebar.)
In plants, PPO herbicides work by disrupting the enzyme’s production of porphyrins, causing harm to the plant. PPO herbicides have been around for almost 40 years and are specifically designed so that they only disrupt PPO enzyme activity in plants and not in humans. “With these herbicides, we are able to intentionally and specifically disrupt plant PPO enzyme activity and do it in a way that cannot possibly have any effect on enzyme activity in humans,” Dayan says.
Dayan recently published a report on the molecular mechanism that can trigger resistance to PPO herbicides in a common weed. Understanding the resistance mechanism should lead to better herbicides.
Plant physiologist Franck Dayan observes wild-type and herbicide-resistant biotypes of pigweed (Palmer Amaranth) as Mississippi State University graduate student Daniela Ribeiro collects plant samples for DNA analysis.
Working in the Weeds
Since the mid-1990s, glyphosate use in crop fields has been so successful that interest in research and development of alternative weed killers had been on the wane. Many experts considered it too difficult to come up with an herbicide that could match glyphosate for cost and effectiveness, Dayan says. But with weeds developing resistance to glyphosate, interest in PPO herbicides is picking up. Herbicides have also become essential tools in modern agriculture, increasing the ability to control weeds to a point where growers are better able to adopt environmentally friendly practices, such as no-till cropping systems.
“Glyphosate still plays a dominant role in weed control in soybeans and other crops, but with glyphosate resistance, there is renewed interest in herbicides that inhibit the PPO enzyme,” Dayan says.
Scientists recently showed that waterhemp (Amaranthus tuberculatus), a common weed, developed resistance to PPO herbicides by deleting an amino acid known as “glycine 210” from the PPO enzyme. Such an evolutionary mechanism is unusual. Enzymes and proteins are made up of amino acids, but when a plant develops resistance to a weed killer, it is usually because one amino acid in an enzyme is substituted for another—not deleted. “This was the first time that resistance caused by a deletion was ever seen,” Dayan says.
Dayan examined the consequences of this amino acid deletion on the PPO enzyme by conducting protein-modeling studies of waterhemp. “The question was, How did the deletion of this amino acid allow the plant to become resistant?” says Dayan.
To find the answer, he and his colleagues overlaid the genetic sequence of the enzyme in the resistant waterhemp plants on the genetic sequence of a related enzyme that has a known structure, in this case, the PPO enzyme from tobacco plants. They also compared the molecular structure of enzymes from PPO-susceptible waterhemp to the structure of enzymes from resistant waterhemp. Using that information, they developed a computer-generated, three-dimensional version of the enzyme in the resistant plant.
The work, published in the journal Biochimica et Biophysica Acta, confirmed that an evolutionary change in a single enzyme—the deletion of an amino acid—caused structural changes in the enzyme-binding site and allowed waterhemp to become resistant to the herbicide. While the structural changes were too insignificant to affect most of the plant’s physiological functions, they did disrupt the PPO enzyme production of porphyrins and caused the enzyme-binding site to become enlarged so that the herbicide did not bind as well.
“The place where the herbicide binds on the enzyme is a key,” Dayan says. Knowing the shape of the binding site will help scientists design herbicides with a different shape that would bind more effectively.
Understanding porphyrins has a practical benefit because of their role in the development of herbicides. But the ubiquitous presence of these ring-shaped molecules, Dayan says, serves as an example of the unified nature of life on Earth. In an article coauthored with his daughter, Emilie Dayan, and published in the May-June 2011 issue of American Scientist, he writes, “They attract little attention, but you find them throughout the plant and the animal kingdom, and life couldn’t exist without them.”—By Dennis O'Brien, Agricultural Research Service Information Staff.
This research supports the USDA priority of promoting international food security and is part of Crop Protection and Quarantine (#304), an ARS national program described at www.nps.ars.usda.gov.
Franck Dayan is in the USDA-ARS Natural Products Utilization Research Unit, Room 2012, University of Mississippi, Oxford, MS 38677; (662) 915-1039.
King George’s Porphyrin Problem
Disruption of the PPO enzyme in humans is rare but is known to cause porphyria, a group of congenital diseases that in one form, known as “variegate porphyria,” can cause symptoms that include temporary paralysis of limbs, sensitivity to light, seizures, hallucinations, and other neuropsychiatric problems. Symptoms can appear intermittently throughout someone’s life.
Agricultural Research Service plant physiologist Franck Dayan notes in American Scientist that porphyrins form pathways that “serve as the assembly line for the most abundant pigments in nature.” Because pigments are involved, people with porphyria may also excrete purplish tint in the urine and feces.
Dayan recounts how several experts have found historical evidence that King George III, monarch of England from 1760 until his death in 1820, had the disease, periodically suffering from abdominal pains, paralysis of the arms and legs, wine-colored urine, and psychiatric problems that eventually forced him into confinement. Some experts have argued that the American Revolution may be partially attributed to the king’s illness because it contributed to his stubbornness in dealing with the colonies.
The king’s illness was portrayed in the 1994 film, “The Madness of King George.”
“Understanding Herbicide Resistance of an Enzyme in the ‘Pigments of Life’” was published in the August 2012 issue of Agricultural Research magazine.
CTS IIT MADRAS
1. slur : speech
ans: smudge : writing (choice is B)
2. epaulet : shoulder
ans: ring : finger (choice is C)
3. vernacular : place
ans: finger print : identical (choice is B)
ans: emaciated (choice is D)
ans: pragmatic (choice is D)
ans: clumsy (choice is B)
12-14: each sentence is broken into four sections a, b, c, d. Choose the one which has a mistake; mark (e) if you find no mistake.
a) psychologists point out that b) there are human processes
c)which does not involve d) the use of words (choice is A)
a)jack ordered for b)two plates of chicken c)and a glass d)of water (choice is A)
14: a) politics is b) (choice is A) (are)
16-20: each question or group of questions is based on a passage or a set of conditions. For each question, select the best answer choice given.
(i) If the object of an agreement is the doing of an act that is forbidden by law, the agreement is void.
(ii) If the object of the agreement is of such a nature that, if permitted, it would defeat the provision of any law, the agreement is void; even if the thing is not directly forbidden by law, it may defeat the provision of statutory law.
(iii) If the object of the agreement is fraudulent, it is void.
(iv) An object of agreement is void if it involves or implies injury to the person or property of another.
(v) An object of agreement is void where the court regards it as immoral.
(vi) An object of agreement is void where the court regards it as opposed to public policy.
17. A, B and C enter into an agreement for the division among them of gains acquired, or to be acquired, by them by fraud. The agreement is void as
Ans: ---- (choice is D)
21-25) An algorithm follows a six-step process za, zb, zc, zd, ze, zf. It is governed by the following:
(i) zd should follow ze
(ii) the first may be za, zd or zf
(iii) zb and zc have to be performed after zd
(iv) zc must be immediately after zb
22) if za is the first step, zd must be
a) 3rd b) 5th c) 2nd d) 4th
Ans:- A or D (probably a)
23) zf can be: 3rd or 5th; any of the six; first, second or fourth only; any of the first four only; none of these
24) if zb must follow za then
a) za can only be 3rd or fourth b) first or second c) can not be third d) fourth or fifth e) none
25) if ze is the third step, the number of different operations possible is
Ans:- D (dabad)
26-31) Ravi plants six separate plants x, y, z, w, u, v in rows 1 to 6, according to the following conditions: he must plant x before y and u; he must plant y before w; the third has to be z.
26) which could be in order
a) xuywzv b) xvzyuw c)zuyxwv d)zvxuwy e) wyzuvx
27) which is true
a) z before v b) z before x c) w before u d) y before u e) x before w
28) if he plants v first, which is second (x, y, z, w, u)? So ans is 'x'. choice is A.
29) which is true
a) x,3 b)y,6 c)z,1 d)w,2 e)u,6
30) if he plants b 6th, which would be first and second
a) x and w b) x and y c)y and x d)w and z e) w and u
31) if he plants w before u and after v, he should plant w at
a) first b)second c)fourth d)fifth e)sixth
36) At a certain moment a watch shows a 2 min lag although it is fast. If it showed a 3 min lag at that moment, but gained 1/2 min more a day than it does, it would show the true time one day sooner than it usually does. How many mins does the watch gain per day?
a).2 b).5 c).6 d).4 e).75
Ans : e
37) In a 400 m race A gives B a start of 7 sec and beats him by 24 m. In another race A beats B by 10 sec. The speeds are
a)8,7 b)7,6 c)10,8 d)6,8 e)12,10
38) 3x + 4y = 10, x³ + y³ = 6; minimum value of 3x + 11y = ?
41)sink---7.7kms---> fills 2 1/4 t is 5.5 min. 92 tonnes enough.. sink throws out 18 tonnes/hr. avg. speed to
a)1.86 b)8.57 c)9.4 d)11.3 e)10.7
42) [Figure lost in extraction: a triangle divided into horizontal strips, each 2 cm high; the area of strip d is 50 cm²; what is the area of strip b?] Ans = (10.7)
43) 600 tennis players; 4% wear a wrist band on one wrist; of the remaining 96%, 25% wear bands on both hands; the remaining number of -
44) 312 (doubt) or 432
45) In how many ways can 5 E, 6 S and 3 F books be arranged if books of each language are to be kept together? Options: 17, 64800, 90, 58400, 3110400
47) Three types of tea a, b, c cost Rs. 95/kg, Rs. 100/kg and Rs. 70/kg. How many kg of each should be blended to produce 100 kg of mixture worth Rs. 90/kg, given that the quantities of b and c are equal? a) 70,15,15 b) 50,25,25 c) 60,20,20 d) 40,30,30
Ans: b
48) water-milk problem
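The stated answer to the blending question above can be checked by brute force: with b = c and a + b + c = 100 kg, only one split yields a Rs. 90/kg mixture. A quick sketch:

```python
# Brute-force check for the tea-blending question: a, b, c cost
# Rs. 95, 100 and 70 per kg; b and c must be equal; total 100 kg at Rs. 90/kg.
def blend():
    for b in range(0, 51):          # b = c, so b can be at most 50 kg
        c = b
        a = 100 - b - c
        cost = 95 * a + 100 * b + 70 * c
        if cost == 90 * 100:        # total worth Rs. 9000
            return a, b, c
    return None

print(blend())  # → (50, 25, 25), i.e. option (b)
```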
q) Two distinct numbers are taken from 1, 2, 3, 4, ..., 28
a) probability that the number is 6 --> 1/14
b) probability that it exceeds 14 --> 1/28
c) probability that both exceed 5 is 3/28
d) probability that it is less than 13 -> 25/28 (24/28)
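Part (a) can be verified exactly, under the interpretation that "the number is 6" means 6 is one of the two chosen numbers (the other parts are too garbled in this transcription to reconstruct reliably):

```python
# Exact check of part (a): two distinct numbers from 1..28; the chance
# that 6 is among them. Pairs containing 6: 27 out of C(28, 2) = 378.
from fractions import Fraction
from math import comb

p_six = Fraction(27, comb(28, 2))
print(p_six)  # → 1/14, matching the stated answer
```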
<urn:uuid:df5bc47a-2829-48d8-87a7-da9872b8596d> | ||This article needs additional citations for verification. (September 2011)|
A Framebuffer (or sometimes Framestore) is a video output device that drives a video display from a memory buffer containing a complete frame of data.
The information in the memory buffer typically consists of color values for every pixel (point that can be displayed) on the screen. Color values are commonly stored in 1-bit binary (monochrome), 4-bit palettized, 8-bit palettized, 16-bit highcolor and 24-bit truecolor formats. An additional alpha channel is sometimes used to retain information about pixel transparency. The total amount of the memory required to drive the framebuffer depends on the resolution of the output signal, and on the color depth and palette size.
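The arithmetic behind that memory requirement is simple: pixel count times bits per pixel (ignoring any row padding or palette storage). A quick sketch:

```python
# Bytes of framebuffer memory needed for a given display mode:
# width x height pixels, each stored in bits_per_pixel bits.
def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(640, 480, 8))    # 307200 bytes (8-bit palettized)
print(framebuffer_bytes(1024, 768, 24))  # 2359296 bytes (24-bit truecolor)
```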
Framebuffers differ significantly from the vector displays that were common prior to the advent of the framebuffer. With a vector display, only the vertices of the graphics primitives are stored. The electron beam of the output display is then commanded to move from vertex to vertex, tracing an analog line across the area between these points. With a framebuffer, the electron beam (if the display technology uses one) is commanded to trace a left-to-right, top-to-bottom path across the entire screen, the way a television renders a broadcast signal. At the same time, the color information for each point on the screen is pulled from the framebuffer, creating a set of discrete picture elements (pixels).
Computer researchers had long discussed the theoretical advantages of a framebuffer, but were unable to produce a machine with sufficient memory at an economically practicable cost. In 1969, Joan Miller of Bell Labs experimented with the first known instance of a framebuffer. The device displayed an image with a color depth of three bits. However, it was not until the 1970s that advances in integrated-circuit memory made it practical to create the first framebuffer capable of holding a standard video image.
In 1972, Richard Shoup developed the SuperPaint system at Xerox PARC. This system had 311,040 bytes of memory and was capable of storing 640 by 480 pixels of data with 8 bits of color depth. The memory was scattered across 16 circuit boards, each loaded with multiple 2-kilobit shift register chips. While workable, this design required that the total framebuffer be implemented as a 307,200 byte shift register that shifted in synchronization with the television output signal. The primary drawback to this scheme was that memory was not random access. Rather, a given position could be accessed only when the desired scan-line and pixel time rolled around. This gave the system a maximum latency of 33 ms for writing to the framebuffer.
Shoup was also able to use the SuperPaint framebuffer to create an early digital video-capture system. By synchronizing the output signal to the input signal, Shoup was able to overwrite each pixel of data as it shifted in. Shoup also experimented with modifying the output signal using color tables. These color tables allowed the SuperPaint system to produce a wide variety of colors outside the range of the limited 8-bit data it contained. This scheme would later become commonplace in computer framebuffers.
In 1974 Evans & Sutherland released the first commercial framebuffer, costing about $15,000. It was capable of producing resolutions of up to 512 by 512 pixels in 8-bit grayscale, and became a boon for graphics researchers who did not have the resources to build their own framebuffer. The New York Institute of Technology would later create the first 24-bit color system using three of the Evans & Sutherland framebuffers. Each framebuffer was connected to an RGB color output (one for red, one for green and one for blue), with a minicomputer controlling the three devices as one.
In 1975, the UK company Quantel produced the first commercial full-color broadcast framebuffer, the Quantel DFS 3000. It was first used in TV coverage of the 1976 Montreal Olympics to generate a picture-in-picture inset of the Olympic flaming torch while the rest of the picture featured the runner entering the stadium.
The rapid improvement of integrated-circuit technology made it possible for many of the home computers of the late 1970s (such as the Apple II) to contain low-color framebuffers. While initially derided for poor performance in comparison to the more sophisticated graphics devices used in computers like the Atari 400, framebuffers eventually became the standard for all personal computers. Today, nearly all computers with graphical capabilities utilize a framebuffer for generating the video signal.
Framebuffers also became popular in high-end workstations throughout the 1980s. SGI, Sun Microsystems, HP, DEC and IBM all released framebuffers for their computers. These framebuffers were usually of a much higher quality than could be found in most home computers, and were regularly used in television, printing, computer modeling and 3D graphics.
Amiga computers, due to their special design attention to graphics performance, created in the 1980s a vast market of framebuffer based graphics cards. Noteworthy to mention was the graphics card in Amiga A2500 Unix, which was in 1991 the first computer to implement an X11 server program as a server for hosting graphical environments and the Open Look GUI graphical interface in high resolution (1024x1024 or 1024x768 at 256 colors). The graphics card for A2500 Unix was called the A2410 (Lowell TIGA Graphics Card) and was an 8-bit graphics board based on the Texas Instruments TMS34010 clocked at 50 MHz. It was a complete intelligent graphics coprocessor. The A2410 graphics card for Amiga was co-developed with Lowell University. Other noteworthy Amiga framebuffer based cards were: the Impact Vision IV24 graphics card from GVP, an interesting integrated video suite, capable of mixing 24-bit framebuffer, with Genlock, Chromakey, TV signal pass-thru and TV in a window capabilities; the DCTV a graphics card and video capture system; the Firecracker 32-bit graphics card; the Harlequin card, the Colorburst; the HAM-E external framebuffer. The Graffiti external graphics card is still available on the market.
Most Atari ST (Mega STE model) and Atari TT framebuffers were created for the VME rear connector slot of Atari machines dedicated to video expansion cards: Leonardo 24-bit VME graphics adapter, CrazyDots II 24-bit VME graphics card, Spektrum TC graphics card, NOVA ET4000 VME SVGA graphics card (capable of resolutions up to 1024x768 at 256 colors or 800x600 at 32768 colors), whose design came from the ISA/PC world (it was effectively an ATI Mach32 with 1 MB of video RAM).
Framebuffers used in personal and home computing often had sets of defined "modes" under which the framebuffer could operate. These modes would automatically reconfigure the hardware to output different resolutions, color depths, memory layouts and refresh rate timings.
In the world of Unix machines and operating systems, such conveniences were usually eschewed in favor of directly manipulating the hardware settings. This manipulation was far more flexible in that any resolution, color depth and refresh rate was attainable – limited only by the memory available to the framebuffer.
An unfortunate side-effect of this method was that the display device could be driven beyond its capabilities. In some cases this resulted in hardware damage to the display. More commonly, it simply produced garbled and unusable output. Modern CRT monitors fix this problem through the introduction of "smart" protection circuitry. When the display mode is changed, the monitor attempts to obtain a signal lock on the new refresh frequency. If the monitor is unable to obtain a signal lock, or if the signal is outside the range of its design limitations, the monitor will ignore the framebuffer signal and possibly present the user with an error message.
LCD monitors tend to contain similar protection circuitry, but for different reasons. Since the LCD must digitally sample the display signal (thereby emulating an electron beam), any signal that is out of range cannot be physically displayed on the monitor.
Framebuffers have traditionally supported a wide variety of color modes. Due to the expense of memory, most early framebuffers used 1-bit (2-color), 2-bit (4-color), 4-bit (16-color) or 8-bit (256-color) color depths. The problem with such small color depths is that a full range of colors cannot be produced. The solution to this problem was to add a lookup table to the framebuffers. Each "color" stored in framebuffer memory would act as a color index; this scheme was sometimes called "indexed color".
The lookup table served as a palette that contained data to define a limited number (such as 256) of different colors. However, each of those colors, itself, was defined by more than 8 bits, such as 24 bits, eight of them for each of the three primary colors. With 24 bits available, colors can be defined far more subtly and exactly, as well as offering the full range gamut which the display can show. While having a limited total number of colors in an image is somewhat restrictive, nevertheless they can be well chosen, and this scheme is markedly superior to 8-bit color.
The data from the framebuffer in this scheme determined which of the colors in the palette was for the current pixel, and the data stored in the lookup table (sometimes called the "LUT") went to three digital-to-analog converters to create the video signal for the display.
The framebuffer's output data, instead of providing relatively crude primary-color data, served as an index – a number – to choose one entry in the lookup table. In other words, the index determined which color, and the data from the lookup table determined precisely what color to use for the current pixel.
In some designs it was also possible to write data to the LUT (or switch between existing palettes) on the run, allowing to divide the picture into horizontal bars with their own palette and thus render an image that had a far wider [than X colors] palette. For example, viewing an outdoor shot photograph, the picture could be divided into four bars, the top one with emphasis on sky tones, the next with foliage tones, the next with skin and clothing tones, and the bottom one with ground colors. This required each palette to have overlapping colors, but carefully done, allowed great flexibility.
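The index-plus-lookup scheme described above can be sketched in a few lines; the palette contents here are invented purely for illustration:

```python
# Indexed-colour expansion: an 8-bit framebuffer value selects one of
# 256 palette entries, each a full 24-bit (R, G, B) triple.
palette = [(i, i, i) for i in range(256)]   # a grey ramp as a stand-in palette
palette[1] = (255, 0, 0)                    # redefine index 1 as pure red

def scanout(framebuffer):
    """Translate index data to RGB triples, as the DACs would per pixel."""
    return [palette[index] for index in framebuffer]

print(scanout([0, 1, 255]))  # → [(0, 0, 0), (255, 0, 0), (255, 255, 255)]
```

Because only the table entries change, redefining index 1 recolours every pixel that uses it without touching framebuffer memory at all.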
While framebuffers are commonly accessed via a memory mapping directly to the CPU memory space, this is not the only method by which they may be accessed. Framebuffers have varied widely in the methods used to access memory. Some of the most common are:
- Mapping the entire framebuffer to a given memory range.
- Port commands to set each pixel, range of pixels or palette entry.
- Mapping a memory range smaller than the framebuffer memory, then bank switching as necessary.
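The bank-switching variant in the last item can be sketched as a small movable window into a larger memory; the 64 KB window and 256 KB card size below are illustrative assumptions, not taken from any particular hardware:

```python
# Bank switching: the CPU sees only a fixed 64 KB window; writing a bank
# register slides that window across the full framebuffer memory.
BANK_SIZE = 64 * 1024

class BankedFramebuffer:
    def __init__(self, total_size):
        self.memory = bytearray(total_size)
        self.bank = 0                      # currently mapped bank number

    def select_bank(self, bank):
        self.bank = bank                   # the "port command" choosing the window

    def write(self, offset, value):
        # offset is within the 64 KB window the CPU can address
        self.memory[self.bank * BANK_SIZE + offset] = value

fb = BankedFramebuffer(256 * 1024)         # a 256 KB card, i.e. four banks
fb.select_bank(3)
fb.write(0, 0xFF)                          # lands at absolute offset 196608
print(fb.memory[3 * BANK_SIZE])            # → 255
```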
Many systems attempt to emulate the function of a framebuffer device, often for reasons of compatibility. The two most common "virtual" framebuffers are the Linux framebuffer device (fbdev) and the X Virtual Framebuffer (Xvfb). The X Virtual Framebuffer was added to the X Window System distribution to provide a method for running X without a graphical framebuffer. While the original reasons for this are lost to history, it is often used on modern systems to support programs such as the Sun Microsystems JVM that do not allow dynamic graphics to be generated in a headless environment.
The Linux framebuffer device was developed to abstract the physical method for accessing the underlying framebuffer into a guaranteed memory map that is easy for programs to access. This increases portability, as programs are not required to deal with systems that have disjointed memory maps or require bank switching.
Since framebuffers are often designed to handle more than one resolution, they often contain more memory than is necessary to display a single frame at lower resolutions. Since this memory can be considerable in size, a trick was developed to allow for new frames to be written to video memory without disturbing the frame that is currently being displayed.
The concept works by telling the framebuffer to use a specific chunk of its memory to display the current frame. While that memory is being displayed, a completely separate part of memory is filled with data for the next frame. Once the secondary buffer is filled (often referred to as the "back buffer"), the framebuffer is instructed to look at the secondary buffer instead. The primary buffer (often referred to as the "front buffer") becomes the secondary buffer, and the secondary buffer becomes the primary. This switch is usually done during the vertical blanking interval to prevent the screen from "tearing" (i.e., half the old frame is shown, and half the new frame is shown).
Most modern framebuffers are manufactured with enough memory to perform this trick even at high resolutions. As a result, it has become a standard technique used by PC game programmers.
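The swap described above amounts to exchanging two buffer references at the vertical blank; a minimal simulation, with "rendering" reduced to filling bytes:

```python
# Page flipping: draw into the back buffer while the front buffer is
# scanned out, then swap the two at the (simulated) vertical blank.
class DoubleBuffered:
    def __init__(self, size):
        self.front = bytearray(size)   # what the display currently shows
        self.back = bytearray(size)    # where the next frame is drawn

    def draw_frame(self, value):
        for i in range(len(self.back)):
            self.back[i] = value       # stand-in for real rendering

    def flip(self):
        # done during vertical blanking so no half-drawn frame is ever shown
        self.front, self.back = self.back, self.front

screen = DoubleBuffered(4)
screen.draw_frame(7)
screen.flip()
print(list(screen.front))  # → [7, 7, 7, 7]: the new frame is now visible
```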
As the demand for better graphics increased, hardware manufacturers created a way to decrease the amount of CPU time required to fill the framebuffer. This is commonly called a "graphics accelerator" in the Unix world.
Common graphics drawing commands (many of them geometric) are sent to the graphics accelerator in their raw form. The accelerator then rasterizes the results of the command to the framebuffer. This method can save from thousands to millions of CPU cycles per command, as the CPU is freed to do other work.
While early accelerators focused on improving the performance of 2D GUI systems, most modern accelerators focus on producing 3D imagery in real time. A common design is to send commands to the graphics accelerator using a library such as OpenGL or DirectX. The graphics driver then translates those commands to instructions for the accelerator's graphics processing unit (GPU). The GPU uses those microinstructions to compute the rasterized results. Those results are bit blitted to the framebuffer. The framebuffer's signal is then produced in combination with built-in video overlay devices (usually used to produce the mouse cursor without modifying the framebuffer's data) and any analog special effects that are produced by modifying the output signal. An example of such analog modification was the spatial anti-aliasing technique used by the 3dfx Voodoo cards. These cards add a slight blur to output signal that makes aliasing of the rasterized graphics much less obvious.
At one time there were many manufacturers of graphics accelerators, including: 3dfx; ATI; Hercules; Trident; Nvidia; Radius; S3 Graphics; SiS and Silicon Graphics. However, currently the market is dominated by Nvidia (incorporating 3dfx from 2002) and AMD (who purchased ATI in 2006).
- Richard Shoup (2001). "SuperPaint: An Early Frame Buffer Graphics System" (PDF). IEEE Annals of the History of Computing.
- "History of the New York Institute of Technology Graphics Lab". Retrieved 2007-08-31.
- XFree86 Video Timings HOWTO: Overdriving Your Monitor. http://tldp.org/HOWTO/XFree86-Video-Timings-HOWTO/overd.html
- Alvy Ray Smith (May 30, 1997). "Digital Paint Systems: Historical Overview" (PDF). Microsoft Tech Memo 14.
- Wayne Carlson (2003). "Hardware advancements". A Critical History of Computer Graphics and Animation. The Ohio State University.
- Alvy Ray Smith (2001). "Digital Paint Systems: An Anecdotal and Historical Overview" (PDF). IEEE Annals of the History of Computing.
- Interview with NYIT researcher discussing the 24-bit system
- Jim Kajiya – Designer of the first commercial framebuffer
- History of Sun Microsystems' Framebuffers
- DirectFB – An abstraction layer on top of the Linux Framebuffer device
- pxCore - A portable framebuffer abstraction layer for Windows, Windows Mobile, Linux and OSX. | 1 | 6 |
Sensor Networks Lab
- SWS: 4, ECTS: 8.0, (for 10.0 , e.g. Media Informatics, you have write a detailed project report)
- Prof. Dr. Klaus Wehrle, Olaf Landsiedel, Jó Ágila Bitsch
- Important dates:
- Weekly meetings: Tuesdays 2pm to 4pm in the I4 Seminar Room (Room 4105, Building E1, Ahornstr. 55, Computer Science Campus)
- Introductory meeting: 3rd of April, i.e. first Tuesday in the term, same place and time as above
- Registration: closed
- Part I: Sensor networks basics: April, May
- Part II: Individual sensor network projects: June to July, maybe August
- Presentation of results: End of July, probably last week of the term
What are sensor networks?
Sensor networks consist of many - up to several thousand - small distributed computing devices that sense and interact with the environment. Various sensors allow a sensor node to measure temperature, sound, vibration, pressure, motion or pollution. Their low price and low energy ensures that sensor networks can be deployed in large numbers. A sensor node consists of a small microcontroller, a radio device, some sensors, and a power supply, usually a battery. Their resources in terms of energy, memory, computational power and bandwidth are severely limited, making sensor nodes an interesting research topic.
Due to their processing and communication abilities, sensor networks are intelligent. Thus, independently of human interaction, the network can deal with node failure, aggregate measurements from various nodes into meaningful data, reprogram selected nodes for new tasks...
Interesting research areas in sensor networks are:
- Sensor node hardware
- Operating systems for sensor nodes
- Data aggregation and fusion
- Distributed data bases
- Distributed algorithms and computing
- And many many others
Why are sensor networks cool?
In the 1980s, the PC revolution put computing at our fingertips.
In the 1990s, the Internet revolution connected us to an information web that spans the planet.
And now the next revolution is connecting the Internet back to the physical world we live in-in effect, giving that world its first electronic nervous system.
Call it the Sensor Revolution: an outpouring of devices that monitor our surroundings in ways we could barely imagine a few years ago. Some of it is already here. The rest is coming soon
(from Special Report on Sensor Networks, National Science Foundation (NSF), USA)
Why should I take a lab on sensor networks?
To be honest, sensor networks are an ideal candidate for a lab to give you a hands-on experience on distributed systems and communications.
The lab consists of a project (see below) which can be combined with our seminar on massively distributed systems. Thus, instead of doing your seminar about the research papers/topics we proposed, you can do your talk and report about your research project in the lab. Taking the lab automatically guarantees you a slot in the following semester's seminar.
What should I bring?
Now, this is a hands-on lab on distributed systems. Thus, you should bring some knowledge in this area. The requirements are:
- Prediploma or equivalent (e.g. be in a masters program)
- Some lectures in the area of Distributed Systems, Communication Systems and/or Mobile Communication
- Taking (or having taken) our MDS-II lecture "Sensor Networks" is helpful
- Knowledge of C programming, additionally some Java is helpful
- Strong interest and willingness to contribute time
What will I do in the sensor networks lab?
The sensor networks lab consists of two parts: (1) Becoming friends with a sensor node and (2) your first sensor node project.
In the first part, we introduce you to the sensor nodes. Lab sessions and your tasks (hands-on experience) cover
- Introduction to the sensor nodes, including their hardware
- Operating Systems (TinyOS)
- MAC and routing in sensor networks
- Data collection, aggregation, fusion
- Distributed algorithms
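As a small taste of the aggregation topic, a common first exercise (this sketch is ours, not part of the official lab material) is in-network averaging: instead of forwarding every raw reading to the sink, each node forwards a (sum, count) pair, so one small record travels per hop and the sink still recovers the exact mean:

```python
# In-network data aggregation: each node combines its own reading with
# its children's (sum, count) partials before forwarding a single record.
def aggregate(reading, child_partials):
    total, count = reading, 1
    for s, c in child_partials:
        total += s
        count += c
    return total, count

# Tiny routing tree: two leaves report to one parent, which reports to the sink.
leaf_a = aggregate(21.0, [])
leaf_b = aggregate(23.0, [])
parent = aggregate(22.0, [leaf_a, leaf_b])
total, count = parent
print(total / count)  # → 22.0, the exact network-wide average
```

Sending partials instead of raw readings keeps radio traffic (and thus energy use) per node roughly constant regardless of network size, which is why aggregation matters on battery-powered hardware.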
After this introduction the second half of the lab will be a project. Thus, you and your teammate(s) will choose (with the help of the teaching assistant) a project. The teaching assistant will give various suggestions of projects that he considers interesting, but you are very welcome to find your own topic. All projects are supposed to address open research problems in sensor networks. | 1 | 2 |
Celebrating the 55th anniversary of the hard disk
The platters of Big Blue spawn that changed the world
All anniversaries are special, and so is this one. It's particularly special because a billion or more people have been and are being affected by it every day. They switch on their PCs and take advantage of Intel processors and Microsoft's Windows, or Mac OS, thinking nothing of it. But before these, and providing a foundation for them, came spinning disks, rotating hard disk drives, the electro-mechanical phenomenon that the world of computing has depended on for decades: 55 years to be precise.
Like much else in our pervasive IT world the disk drive's roots were laid down by IBM, and first appeared in a product called the 305 RAMAC, The Random Access Method of Accounting and Control, launched this week 55 years ago. Ah, those were the days.
IBM 305 RAMAC in use. The upright disk drive stacks can be seen inside the cabinets.
Cue drum roll
In the 1950s, magnetic drum storage was used with data stored in a recording medium wrapped around the outer surface of a drum. To get higher capacity the drum had to become bigger and bigger. It wasn't very space-efficient but you could have data continuously available instead of having to be read into dynamic memory from sequentially read-in, punched cards.
It's hard to imagine, but once upon a time there was no online data stored separately from a program's data in the computer's dynamic memory, which, of course, disappeared as soon as the application stopped running. The app couldn't be swapped out to disk to let another app in to memory. For one there was no disk and, for two, there were no multi-tasking operating systems.
Magnetic drum memory was a marvel but it was a bit like wrapping papyrus around a barrel; infinitely better than nothing, but not that great actually. What was needed was something with the same random access as drum memory but, somehow, more recording surface, much more, in the same volume.
Suppose you could create a jukebox-like design: by stacking "phonograph records", only rather than grooved vinyl platters read with a steel needle, they would be the same recording media and read method as drum memory – only implemented as records, spinning platters, with concentric tracks of data.
There were engineering papers discussing the concept in the early 1950s but it was IBM, in a fantastic burst of innovation, that produced the first commercial hard disk drive product from its San Jose research lab in September 1956. Univac could have got there first but stopped its own disk drive product so as to prevent the cannibalisation of its existing 18-inch drum memory product; a stumble which was followed by others as Univac became Sperry which became Unisys and has always, but always, been in IBM's shadow.
RAMAC disk stack.
Inside RAMAC there were two independent access arms which moved vertically up a stack of 50 disks, 24 inches in diameter, to the right disk, and then sideways across the target disk's surface to read and write data from the destination track. It took, we understand, 600ms to find a record in a RAMAC. The data capacity was 5MB (8-bit bytes: 7 bits of data plus a parity bit) and it would cost a business $38,400 a year to lease it. There was no popping out to your local Fry's to buy one...
The very first commercial RAMAC was used by Crown Zellerbach of San Francisco, a company dealing in paper, which is apt really – the idea of disk records replacing paper records.
Big Blue's spawn
IBM built and sold a thousand RAMACs, which probably brought the company around $30m – not bad at all. Big Blue stopped making it in 1961, five years after it was introduced, and replaced it with a better disk drive system: the 1405 Disk Storage Unit.
Hitachi GST hard disk drive, 500GB, 3.5-inch, spinning at 7,200rpm.
RAMAC gave birth to dozens and dozens of disk drive companies and formats but over time they were whittled down as manufacturing prowess and volume became as important as basic technology innovation.
IBM got out of the business, selling its disk drive operation to Hitachi GST, which is now being bought by Western Digital. Seagate is buying Samsung's drive operation, and Toshiba has bought Fujitsu's disk business. That's it, these are the disk drive survivors and millions of drives a year are pouring out of their factories.
Today's HDD is a compact and dense collection of technological miracles encased in a metal box looking like any other slot-in piece of componentry. Inside are one, two, four or five platters with read/write heads on slider arms and a circuit board with chips on it. Each piece of equipment in there is fantastically highly engineered.
Today we have 4 terabyte, 5-platter, 3.5-inch drives with a read/write head for each side of the platters. There is a 3 or 6Gbit/s interface to the host computer, and the drive spins at 7,200rpm with barely a sound. Faster drives that hold a few hundred gigabytes of data spin at 15,000rpm. The 305 RAMAC is a primitive beast indeed compared to today's Barracuda or Caviar drive. But dinosaurs were our ancestors, and RAMAC begat, eventually, Cheetah, Barracuda, Deskstar, Savvio, Caviar and every other disk drive brand you have come across.
Is the end in sight?
Until quite recently there has been no technology available to beat the hard disk drive combination of steadily increasing capacity, data transfer rate, space efficiency, cost and reliability. But the flash barbarians are at the gate with slightly better space efficiency, lower weight, much higher transfer rates, and also much higher cost.
Servers, though, need more data, much, much more data than before, and disk drives can't keep up. Big Blue's spawn is finally being threatened, and flash is taking over the primary data storage role, with disk looking to be relegated to bulk data storage duties. We'll certainly still see disk in use in 10 years' time, but in 20 years' time and 30 years' time?
It's hard to say. However, the 55-year reign of this technology thus far has been an amazing feat and steadily increasing technology complexity combined with, and this is so amazing but taken so much for granted, steadily decreasing cost per megabyte of data stored. The whole HDD story is a testament to the effectiveness of high-volume manufacturing.
Over 160 million spinning disk drives will be delivered this year, with probably more next. They whir away inside our notebooks, PCs and servers, inside their anonymous metal cases, and serve up trillions of bytes of data over their lifetime. And we just take it for granted. That's the best tribute of all really. The technology inside these ordinary-looking metal boxes is so extraordinarily good that it just simply works, day after day after day...
Well, most of the time. Oh, what was that noise, that rending metallic sound? Oh, no, I've had a head crash. My data, oh good Lord, my data... ®
Assets in the United States
Before, during, and after the United States entered the war, the U.S. government endeavored to deny Germany control over economic assets in, or being brought into, the United States.1 Because the United States took control over so many assets, it undoubtedly seized some that belonged to Holocaust victims, though unintentionally. While U.S. officials were not oblivious to the intensity of Nazi persecution, it would take until the end of the war before policy began to accord special status to victims and their assets. Even in the immediate postwar period, other issues took priority, and it seemed consistently more important to fight the enemy than to aid the victims.
When the Germans invaded a country, the U.S. government assumed that the assets of that country, including those located in the United States, would be used to help the Axis powers and acted to block them. The Treasury Department immobilized foreign-controlled assets while the Alien Property Custodian (APC) seized assets. While the former practice left title with the original owner, the latter transferred title to the U.S. government.
Foreign Funds Control and the "Freezing" of Assets
Freezing Foreign-owned Assets
Germany invaded Denmark and Norway on April 8, 1940, and the United States quickly responded to the aggression. In an attempt to keep the Germans from taking control of Danish and Norwegian assets held in the United States, Executive Order 8389 "froze" all financial transactions involving Danes and Norwegians. The freezing order prohibited, subject to license, credit transfers between banking institutions within the United States and between the United States and foreign banks, payments by or to banking institutions in the United States, all transactions in foreign exchange, the export of gold or silver coin or bullion or currency, and all transfers, withdrawals or exportations of indebtedness or evidences of ownership of property by any person within the United States. It also prohibited acquiring, disposing, or transferring any security bearing foreign stamps or seals, and gave the Secretary of the Treasury the power to investigate, regulate, or prohibit the mailing or importing of securities from any foreign country.2 The executive order provided that willful violation could carry a $10,000 fine, 10 years imprisonment or both.3
The rapid U.S. response was possible only because of long preparation. In issuing Executive Order 8389, the President acted on the basis of the Trading with the Enemy Act of 1917, as amended by Congress in 1933, which provided him with the authority to:
investigate, regulate, or prohibit...by means of license or otherwise any transactions in foreign exchange, transfers of credit between or payments by banking institutions...and export, hoarding, melting, or earmarking of gold or silver coin or bullion or currency by any person within the United States or any place subject to the jurisdiction thereof.4
The U.S. government had first considered the use of such economic weapons in 1937. In response to the Japanese bombing and sinking of the American gunboat Panay in Chinese waters, Herman Oliphant, General Counsel in the Treasury Department, suggested to Treasury Secretary Henry Morgenthau that foreign exchange controls and a system of licenses for financial transactions could be instituted against the Japanese.5 Tensions with Japan subsequently eased and Oliphant's proposals were shelved. But in 1938, after the German annexation of the Sudetenland, reports circulated that the Germans were forcing Czechs to turn over all assets they held in the United States. Such information prompted the Treasury to revisit Oliphant's proposals.
Subsequent German actions, including the occupation of the Czechoslovakian lands of Bohemia and Moravia, further increased support within Treasury for the imposition of freezing controls. Treasury also took the step of verifying with the Justice Department the legality of such controls in the absence of a state of war.6 Although the United States ultimately decided not to respond to Germany's actions in 1938 and 1939, Treasury was prepared to act quickly in April 1940.
As Germany continued its invasions, the U.S. government successively froze assets, country by country, over the European continent. Thus, on May 10, 1940, FFC extended freezing controls to cover the Netherlands, Belgium, and Luxembourg.7 The assets of France and Monaco (June 17), Latvia, Estonia, and Lithuania (July 10), and Romania (October 9) were subsequently frozen that year.8 By the end of April 1941, the United States added Bulgaria, Hungary, Yugoslavia, and Greece to the list.9
The further extension of controls to belligerents and neutrals remained controversial. While Treasury favored a rapid extension of controls, the State Department, concerned about maintaining America's status as a neutral as well as U.S. diplomatic privileges, objected.10 Assistant Secretary of State for Economic Affairs Dean Acheson noted that "from top to bottom our [State] Department, except for our corner of it, was against Henry Morgenthau's campaign to apply freezing controls to Axis countries and their victims."11
Eventually, the course of the war dictated a shift in U.S. policy. On June 14, 1941, through Executive Order 8785, the United States extended freezing controls to cover all of continental Europe, including "aggressor" nations and annexed or invaded territories (Germany and Italy; Danzig, Austria, and Poland) as well as neutral nations, small principalities, and countries not previously included (Spain, Sweden, Portugal, and Switzerland; Andorra, San Marino, and Liechtenstein; Albania and Finland). Turkish assets were never blocked, and Soviet assets were only blocked for a relatively short time until Germany invaded Russia in June 1941.12 As the United States moved from being a neutral to a belligerent, the role of FFC, an administrative agency within the Treasury Department, expanded.
Assets within the United States
Census of Foreign-owned Assets in the United States (1941)
On June 14, 1941, amended regulations under Executive Order 8389 called for a census of foreign-owned assets.13 Every person in the U.S. (including corporations and foreign nationals) was required to report all property held for or owned by a foreign country or national. The Treasury Department reasoned that no one could foresee which nations might yet be overrun, that title changes could occur anywhere, and that compiling comprehensive records of who controlled which assets was vital.14 Treasury was subsequently unwilling to share the information it had gathered even with friendly foreign governments or American creditors.15
The census form listed some thirty types of property: bullion, currency and deposits; domestic and foreign securities; notes, drafts, debts, claims; miscellaneous personal property such as bills of lading, commodity options, merchandise for business use, jewelry, machinery, objets d'art; real property and mortgages; patents, trademarks, copyrights, franchises; estates and trusts; partnerships; and insurance policies and annuities. The value of each asset both in June 1940 and in June 1941 had to be provided, as did extensive information about the persons with an interest in the property (including citizenship, address, date of last entry into the United States, visa type, and alien registration number), to enable the government to trace transfers and changes in the assets. Property whose total value was less than $1,000 did not have to be reported unless its value could not be ascertained, but even assets with values difficult to assess in dollar terms, such as patents or interests in partnerships, had to be reported as did the contents of safe deposit boxes.16
The census revealed that earlier U.S. government estimates, which have since been destroyed, had often been inaccurate.17 Generally, Axis holdings had been underestimated while the holdings of German-occupied countries (particularly France and the Netherlands) had been overestimated. A sizeable portion of foreign-owned assets proved to be in the hands of British and Canadian investors. The census showed as well how dominant New York was as a financial center: two-thirds of all reports, and more than three-quarters of all bank and broker reports, were filed in the New York district.18
Overall, the 565,000 reports submitted showed the total value of U.S. assets owned by foreign persons or entities in 1941 was $12.7 billion. About two-thirds, or $8.1 billion, of the $12.7 billion total reported belonged to 84,000 persons located in European countries,19 and other than the large United Kingdom ($3.2 billion) holdings, the only other European countries with holdings near $1 billion were Switzerland ($1.2 billion), France ($1 billion), and the Netherlands ($977 million). 20
But Treasury Department fears that Germany might be able to exploit the assets from occupied territories were warranted. The total value of foreign-controlled U.S. assets just from West European countries that were overrun or defeated in 1940--Denmark ($48.1 million), Norway ($154.7 million), the Netherlands ($976.7 million), Belgium ($312.7 million), Luxembourg ($33.4 million), France ($1 billion), and Monaco ($15.5 million)--amounted to $2.5 billion, or more than twelve times Germany's 1941 U.S. holdings of $198 million.21 The following table shows the geographic origin of the funds, according to the census:
Table 5: Value of Foreign-Owned United States Assets
(by continent and country of reported address of the owners, as of June 14, 1941, in millions of dollars)

[Most cell values in this table are illegible in this copy. Surviving row labels, each introduced as "of this, ..." beneath its regional total with the largest countries in the region: United Kingdom, Switzerland, and France (Europe); Canada; China, the Philippine Islands, and Japan; Argentina, Brazil, Panama, and Mexico; Cuba, under West Indies and Bermuda (regional total $306 million); South Africa and Belgian Africa; Australia.]
Only 21 percent of the foreign-owned U.S. assets were owned by individuals: corporations owned 63 percent, and governments 16 percent. Among individuals, securities ($808 million), estates and trusts ($799 million), and deposits ($505 million) predominated, though all three combined amounted to only 16.5 percent of all foreign-owned assets. Corporate holdings in deposits ($2.8 billion), interests in enterprises ($2 billion), and domestic securities ($1.8 billion) were far larger. Of the foreign-owned American securities 75 percent were held by persons in only five countries (the United Kingdom, Canada, Switzerland, the Netherlands, and France), with the majority (70 percent) in common stock, and far less (12 percent and 8 percent, respectively) in preferred stock and corporate bonds. Only about 30 percent of these securities belonged to individuals, while 65 percent belonged to corporations. 22
At the wealthy end of the spectrum, 9,255 "persons" (including corporations) who each held total assets greater than $100,000 accounted for fully 88 percent ($11.2 billion) of the $12.7 billion total. At the other end, the census reported 112,399 "persons" with assets of less than $10,000 each, who together accounted for only 3.3 percent ($427 million) of the total foreign-owned assets counted. In fact, "small holdings of less than $5,000 accounted for...58 percent of the number of persons, and close to 90 percent of them were individuals." 23
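These shares follow directly from the census totals quoted above; a quick arithmetic sketch (figures in millions of dollars, taken from the text):

```python
# Census totals, in millions of dollars (from the 1941 census figures above).
TOTAL = 12_700          # all foreign-owned U.S. assets
EUROPEAN = 8_100        # held by the 84,000 persons in European countries
LARGE_HOLDERS = 11_200  # held by the 9,255 "persons" with > $100,000 each
SMALL_HOLDERS = 427     # held by the 112,399 "persons" with < $10,000 each

european_share = EUROPEAN / TOTAL        # ~0.64, i.e. "about two-thirds"
large_share = LARGE_HOLDERS / TOTAL      # ~0.88, i.e. "fully 88 percent"
small_share = SMALL_HOLDERS / TOTAL      # ~0.034, i.e. "only 3.3 percent"

print(f"European share: {european_share:.1%}")
print(f"Largest holders: {large_share:.1%}")
print(f"Smallest holders: {small_share:.1%}")
```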
Looted Assets and the U.S. Market
Though FFC wanted to prevent the enemy from using assets in the United States, it did not want to hinder legitimate business, and therefore developed a licensing system to monitor and regulate transactions. Controlling readily convertible and transportable assets, in particular securities, currency, and gold, called for different measures.
Securities. Treasury's General Ruling 2, "Transfer of Stock Certificates and Custody of Securities," issued nine days after the proclamation of Executive Order 8389, prohibited transfers of securities involving Danish or Norwegian nationals. The ruling had limited impact since the combined value of U.S. domestic securities held by Danes and Norwegians amounted to only $16.6 million. 24 However, the German invasion of the Netherlands, Belgium, and Luxembourg in May 1940 substantially increased the stakes, as securities worth $358.2 million were in danger of falling under German control. Dutch owners held 89 percent of these securities, and as many as half of the securities were thought to be in the form of readily convertible corporate bearer bonds. 25 "Unless a way could be found to prevent the liquidation of securities seized by the invaders," a Treasury Department summary noted in mid-1942, "tremendous losses would accrue to their legitimate owners, and a tremendous asset would be given to the war effort of the Axis." 26
Initially, FFC recommended the destruction of securities that were at risk of German seizure. In the Netherlands, many owners resisted this approach, for fear of not being able to replace the securities at a later date. The Dutch government also informed the State Department that owing to the military situation it was simply too late to undertake any financial measures at all and that not enough personnel was available to destroy all the securities. In desperation, FFC sent a cable with the instruction to take the securities and "dip them in red wine," 27 thereby making them immediately identifiable in case they were looted.
Such expedients were insufficient to keep Germany from exploiting looted securities registered in the name of an individual. On June 3, 1940, the Treasury Department issued General Ruling 3, extending the freezing control to prohibit the acquisition, transfer, disposition, transportation, importation, exportation, or withdrawal of any securities that were registered in the name of a national of a blocked country. The ruling prohibited U.S. registrars or transfer agents from changing the name in which the security was registered, even if a legitimate transfer of title had taken place before the German invasion. 28
While General Ruling 3 blocked the transfer of enemy-captured registered securities, it did not fully address the direct importation of bearer securities into the United States. To attack this problem, FFC determined that an import inspection system was necessary. On June 6, 1940, it issued General Ruling 5 on the "Control of Imported Securities," prohibiting "the sending, mailing, importing, or otherwise bringing into the United States" of securities. If any securities were physically brought into the country, they were to be immediately turned over to a Federal Reserve Bank.
For implementation, FFC relied heavily upon other agencies, in particular on the Customs Service and the Post Office. Customs inspectors met, questioned, and searched incoming passengers to determine whether they were carrying securities, while postal inspectors examined the incoming mail to make sure that stocks and bonds did not enter the country surreptitiously. 29 Once the Federal Reserve Bank took possession of securities, "they could be released only upon proof, judged sufficient by the Treasury Department, that no blocked country or national thereof had any interest in such securities since the date of the freezing order." 30
Imported securities that could not be released remained in the custody of the Federal Reserve Bank. Yet to prevent undue hardship, the Treasury Department issued General Ruling 6 on August 8, 1940, allowing such surrendered securities to be moved from the Federal Reserve Bank into special blocked accounts (called "General Ruling 6 Accounts") in domestic U.S. banks. This arrangement permitted the completion of certain basic transactions. Dividends from these securities, for example, and even the proceeds from the sale of these securities, could be accrued in these blocked bank accounts, as well as taxes and bank charges deducted.
Unregistered (or bearer) securities falling into enemy hands were particularly troublesome, not only because they had been used extensively before the war by German cartels as a means to hide ownership, but also because General Ruling 3 did not apply to them and General Ruling 5 only applied to blocked countries. Thus, between the time General Ruling 5 was issued in June 1940 and the extension of freezing controls to the neutrals in June 1941, it was possible for Swiss, for example, to continue to export securities to the United States. Then there were the issues of controlling foreign securities that had been issued in and were payable in the United States, and preventing blocked nationals from acquiring controlling interests in U.S. corporations by buying their stocks and bonds.
The Treasury Department addressed many of these problems through certification, an expedient somewhat similar to the European practice of affixing tax stamps to legitimately acquired securities. Treasury's certification (using Form TFEL-2) could be attached to securities "if the owners could prove that they were free from any blocked interest." 31 Treasury also applied this device to securities issued in blocked countries but payable in the United States. 32 Because securities looted abroad might be sold to persons resident in the United States, the freezing order prohibited such acquisitions or transfers as long as these securities were not in the United States. By 1943, this prohibition was relaxed so as to permit the acquisition of securities in Great Britain and Canada, as well as to a more limited extent from within the generally licensed trade areas. 33
Currency, Dollar Checks, and Drafts. In the process of trying to prevent securities from entering the country, U.S. Customs also discovered that currency, particularly dollars, was being brought into the United States, amounting to $3 million worth in Fiscal Year 1943 alone. 34 In 1940 and 1941, the United States was still neutral and no mechanism was in place to impound currency. Foreign Funds Control therefore asked the Collector of the Customs simply to keep a record of the amount of currency arriving and to send a monthly report to the Treasury Department listing sender and recipient. By 1942, the United States was a belligerent and General Ruling 6A, issued in March 1942, added "currency" to the definition of "securities or evidences thereof," thereby enabling FFC to apply similar controls.35 The operating presumption of the Treasury Department was that dollars imported directly from Europe had been looted. 36 Only currency imported from Canada, Great Britain, Newfoundland, or Bermuda escaped the wartime currency import restrictions.37
Controlling the direct importation of securities or currency did not address yet another problem: censorship offices in both the United States and Great Britain were discovering that "substantial amounts of funds" were flowing between Europe and the Western Hemisphere in the form of dollar-denominated checks and drafts. 38 U.S. funds were entering blocked European countries, and dollar checks were being resold in neutral countries. The U.S. Legation in Switzerland, for example, reported that Germany had obtained about $12 million in this fashion, noting that German agents were trying to sell such checks and drafts in neutral countries at a discount in order to acquire Swiss francs and Portuguese escudos. 39 General Ruling 5A (July 7, 1943) thus required a license to collect payment on these kinds of checks, a control that worked in both directions. 40 Checks, drafts, notes, securities, or currency could not be exported to any blocked country unless under license, and all checks or drafts imported after August 25, 1943, had to be sent to the Federal Reserve Bank of New York. There they were held indefinitely, with licenses for their release only granted in very unusual circumstances. 41
Gold. Whether the U.S. government knowingly traded in gold looted from victims begs the prior question of the nature of the gold trade. U.S. policies on gold long predated the war, and the war did not substantially alter them. Until 1934, U.S. currency could be redeemed in gold coin, and by statute the Treasury had to maintain a minimum amount of gold to make redemptions possible. Economic expansion in the 1920s had increased the domestic demand for currency, which increased the purchase of gold from abroad, and turned the United States into "a gigantic sink for the gold reserves of the rest of the world." 42 Having gold (in Fort Knox, for example) helped maintain public confidence in the currency. 43
The economic crisis of the Depression led to passage of the Gold Reserve Act of January 30, 1934, prohibiting the private trade in gold and giving the Treasury Secretary the authority to control all future dealings in gold, including setting the conditions under which gold could be held, transported, melted, treated, imported, and exported, as well as allowing him to "purchase gold in any amounts, at home or abroad...at such rates and upon such terms and conditions as he may deem most advantageous to the public interest." 44 Immediately after passage of the act, President Roosevelt revalued gold, fixing its price at $35/oz. (substantially up from its previous $20.67/oz. price). In effect, this gave the Treasury a paper profit of almost $3 billion, $2 billion of which went into a Stabilization Fund authorized to deal in gold and foreign exchange with an eye to influencing the value of the dollar by buying and selling on the open market.45 The Treasury Secretary thus was given both full power to buy gold and substantial funds to do so.
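The "almost $3 billion" paper profit follows from the price change alone: every ounce already held was marked up from $20.67 to $35. A back-of-the-envelope sketch (the $2.8 billion figure is an assumed reading of "almost $3 billion", and the implied stock size is an inference, not a figure from the text):

```python
OLD_PRICE = 20.67   # USD per troy ounce before the January 1934 revaluation
NEW_PRICE = 35.00   # USD per troy ounce afterwards
PROFIT = 2.8e9      # assumed value for the "almost $3 billion" paper profit

gain_per_oz = NEW_PRICE - OLD_PRICE   # $14.33 of paper profit per ounce held
ounces_held = PROFIT / gain_per_oz    # implied holdings: ~195 million troy oz
old_value = ounces_held * OLD_PRICE   # implied pre-revaluation stock: ~$4bn

print(f"Implied gold stock: {ounces_held / 1e6:.0f}M oz, "
      f"worth ${old_value / 1e9:.1f}bn at the old price")
```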
The consequences of the Gold Reserve Act and the revaluation were immediate. Gold held privately (and by banks) in the United States was turned in to the Treasury Department and added $2.4 billion to its ledgers in 1934 alone, 46 and the increase in the price the United States would pay stimulated mining to such an extent that during the six years after 1934, world gold production rose by two-thirds and U.S. domestic production more than doubled. 47
More importantly, these U.S. changes drew capital from Europe, perhaps in part due to the new price, but certainly because of "the growing threat of Nazism in Hitlerite Germany." 48 In only six weeks from February 1 to March 14, 1934, more than half a billion dollars' worth of gold was imported by the United States, and by 1936, an estimated $3 billion worth of gold had come from France. 49 Gold hoarded in England came onto the London market in 1936 and 1937, and from August 1938 to May 1939 alone, about $3 billion worth of gold came to the United States, $2 billion of which was from the United Kingdom, and perhaps $670 million of that U.K. gold had been transshipped from other countries. 50 In fact, from February 1934 until October 1942, "a phenomenal gold movement" to the United States began, with gold stock increasing "every single month for 8 years" (an average yearly increase from 1934 to 1938 of $1.5 billion worth of gold) in the end amounting to $16 billion worth of gold flowing into the United States. 51
The following table, prepared by the Treasury Department in answer to an inquiry from Senator William Knowland in 1952, lists yearly gold flows to and from the United States:
Table 6: U.S. Gold Flows, 1934 - 1945
(millions of dollars at $35/oz.)

Columns: year; purchases of foreign gold (1); sales (2); change in gold stock (3); total gold stock (4). [The annual figures themselves are illegible in this copy of the table.]

Notes:
1. Purchases of foreign gold (1933 - 1944) include gold from foreign governments, private holders, mines, refiners, and others.
2. Sales data (1934 - 1939) include some sales made to non-governmental buyers in the UK and French gold markets.
3. Discrepancies between purchases minus sales and the listed change in gold stock are due to the omission of data on domestic net receipts (newly mined domestic gold, domestic coin, and secondary gold, less sales to domestic industry). Prewar, the mean value was +176, range +118 (1934) to +231 (1940); during the war, the mean value was +45, range -66 (1945) to +196 (1941).
4. Total gold stock includes gold in the Exchange Stabilization Fund.
Unmistakably, gold was fleeing Europe before the war, and Europeans were receiving dollars for it. However, most gold was not coming directly from the Axis powers. A table prepared in connection with Stabilization Fund hearings in 1941 showed that from 1934 through 1940, the U.S. imported only $94,000 worth of gold from Germany, $60.5 million from Italy and $692.5 million from Japan.53 Relative to the increase in the U.S. gold stock from 1934 to 1940, all gold imported from Germany, Italy and Japan combined ($753 million) accounted for less than 5 percent of the total increase in the U.S. gold stock. In fact, three-quarters of the gold imported by the U.S. in 1940 came from only three countries: Canada (55 percent), the United Kingdom (14 percent) and France (5 percent).54
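The Axis figures here are internally consistent; summing them, and setting the total against the roughly $16 billion inflow quoted earlier (an approximation, since that figure covers 1934-1942 rather than 1934-1940), reproduces the "less than 5 percent" claim:

```python
# Gold imported by the U.S., 1934-1940, in millions of dollars (from the text).
GERMANY = 0.094   # i.e. $94,000
ITALY = 60.5
JAPAN = 692.5

axis_total = GERMANY + ITALY + JAPAN   # = 753.094, the "$753 million" quoted
STOCK_INCREASE = 16_000                # assumed: ~$16bn inflow cited for 1934-1942
axis_share = axis_total / STOCK_INCREASE

print(f"Axis imports: ${axis_total:.0f}M = {axis_share:.1%} of the inflow")
```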
The volume of gold coming in troubled Treasury, which held internal discussions about embargoes or other means to stem the flow. But Harry Dexter White, head of the Division of Monetary Research, wrote to Secretary Morgenthau in May 1939 that "there is very little we can do to reduce gold imports--except promote recovery here."55 White himself regarded gold as "the best medium of international exchange yet devised," one that served to insulate the U.S. domestic economy from foreign economic changes, and the medium of exchange par excellence since "every country in the world will sell goods for gold and no country will refuse gold in settlement of debt or in payment for services rendered."56
White was well aware that it was "the fear of war abroad with its concomitant likelihood of depreciation, strict exchange controls, and possible inflation or confiscation" that was prompting the massive inflow, and that people wanted to protect assets abroad from sequestration or wanted to buy American currency since that allowed them to "have funds in the form that can be easily hidden" from their governments.57 Eighty percent of the inflow was being put into short-term balances, suggesting that the dollars received for gold were being temporarily parked in U.S. accounts, some of which were probably then frozen.
The German invasion of France, Belgium, and the Netherlands in May 1940 prompted Mr. Pinsent, Financial Counselor at the British Embassy, to send a note to the Treasury Department to inquire of Mr. Morgenthau "whether he would be prepared to scrutinize the gold imports with a view to rejecting those suspected of German origin," as Pinsent explicitly feared that the private hoards of Dutch and Belgian gold might fall into German hands.58 In a June 4, 1940 memo, Harry Dexter White explained why the U.S. Treasury did not raise questions about the origin of "German" gold.
First, such gold could readily lose its identity by being used as payment in third countries. If Germany looted gold and resold it, the global cooperation needed to stop this movement simply did not exist. Second, Treasury had consistently taken the position before Congress that "it cannot effectively distinguish gold originating from any one foreign country."59 Third, Germany could claim its gold shipments were of its own prewar stocks, meaning gold would have to be refused not for its title but for political reasons. Fourth, discriminating against gold from Germany "will intensify Germany's propaganda against the usefulness of gold." The most effective contribution the United States could make to keep gold as an international exchange medium, White argued, "is to maintain its inviolability and the unquestioned acceptance of gold as a means of settling international balances."60
Indeed, six months later White would scornfully write of his "adamant opposition to give even serious consideration to proposals coming from those who know little of the subject that we stop purchasing gold, or that we stop buying the gold of any particular country, for this or for that or for any particular reason."61 In early 1941, White was asked again, through an internal Treasury memorandum, to consider the question "whose gold are we buying?"62 but from his memos it is clear that the answer was an "unquestioned acceptance of gold," regardless of origin.
Following application by an individual or business, Treasury issued licenses that could be either Specific (governing a particular transaction) or General (covering broader categories of transactions). General Licenses removed the need for FFC to investigate every transaction, and such licenses were functionally differentiated to apply to persons, geographic regions, or particular types of transactions. Entire categories of transactions were deemed acceptable, such as the payment of interest on securities, managing or liquidating property to meet expenses and taxes, servicing life insurance policies, and even sending remittances to persons in territories occupied by the enemy, though Treasury placed restrictions on those remittances and the amounts could not come from blocked accounts.63
Under "persons," Treasury designated individual nationals of blocked countries who had been residents of the United States for a certain period of time as "generally licensed nationals." These persons, as well as all residents in the United States, regardless of nationality or length of residence, could obtain a certain amount of money for living expenses, even from blocked accounts.64 Treasury also designated certain regions as "generally licensed trade areas." Such a designation permitted transactions to occur without restriction. In a modified example of this approach, Treasury granted general licenses to the four neutral countries (Switzerland, Sweden, Spain, Portugal), with provisos that transactions be certified by government, central bank, or designated agent, and that these transactions were not carried out on behalf of a blocked country or national. Under such a license, a Swiss national in Switzerland could
transfer funds from his account in a bank in New York to Credit Suisse in Switzerland to be used for the payment of goods which he is going to purchase in Switzerland. On the other hand, a German citizen in Switzerland...cannot...transfer funds in this manner for the same purpose.65
Yet Treasury wanted to disrupt economic life as little as possible, and during the war it approved 83 percent of all applications to conduct financial transactions under the freezing order.66 According to the Treasury Department, "from January 1942 to March 1945, transactions in assets totaling over $10 billion were authorized by specific license."67 This $10 billion represented 78 percent of the total amount of foreign-owned assets reported in the 1941 Census ($12.7 billion), and suggests Treasury's main concern was for the 20 percent or so of assets that were suspected of being under enemy control.
In spite of its generally positive approach, Treasury was ready to exert more onerous controls particularly over businesses owned by or which had close ties to enemy companies. Before 1933, German companies had commercial arrangements with American companies, such as exclusive sales contracts and patent-sharing, or had established subsidiaries in the United States, and while some of these involved legitimate business practices, others used mechanisms that lent themselves to concealment.68 For example, shares of stock that represented majority ownership and control of an American company would be transferred to holding companies in various countries in the form of bearer shares. Because the holding company's stock was both frequently traded, including to other holding companies, and ownership was anonymous, it was not possible to establish who actually controlled the shares.69 The Treasury Department also feared that German interests might use Swiss or Dutch companies as fronts for clandestine operations inside the United States.
Treasury possessed a variety of means to control blocked businesses. Its reporting requirements obliged such businesses to file affidavits containing detailed information about their organization, directors and officers, their relationships to other enterprises, their principal customers and their capital structure. Armed with this information, Treasury could determine whether or not to license a given business to continue operations. As a condition for granting such licenses, Treasury could mandate changes in organizational structure, require that executives or employees be dismissed, or make the enterprise break off relations with certain customers. Treasury could also deny the renewal of a license.70 More intrusively, government "intervenors" could be placed directly inside firms to supervise or reorganize the business, including severing contracts and preventing trades, or liquidating stock. By withholding a license, Treasury could prevent a company owned or controlled by an Axis power from operating at all and could force a sale of its assets: government representatives would be placed on the premises to supervise the liquidation. By mid-1942, Treasury had liquidated about 500 enterprises, many of them banks and insurance companies,71 and the funds remaining from the sales, after creditors had been paid, went into blocked accounts.72
In short, exerting control over foreign funds entailed a variety of discrete if interconnected acts. The basic policy decision to freeze assets predated the U.S. declaration of war, but each German act to invade or control new territory engendered a new response from Washington as well as an additional presidential executive order. For Treasury to exert control over foreign-controlled assets necessitated first gathering detailed information about the extent and ownership of such assets through a census. Then a licensing system to allow scrutiny of asset transactions had to be devised and implemented. In practice, the vast majority of transactions were permitted, albeit Treasury had at its disposal intrusive control measures when it suspected enemy interests might be involved in a transaction.
Readily fungible assets like bearer securities and gold were particularly troublesome, as the U.S. wartime expedient of certifying the legitimacy of transactions in such assets obscured their looting by Germans, or the duress under which such assets had changed hands (or been converted to cash) in Europe. The restitution of such assets even to their countries of issue would remain a contentious matter long after the end of the war. Yet in all of this, the U.S. concern was to keep property out of the hands of the enemy, and if possible even to preserve the property and rights of legitimate asset holders. Legitimate asset holders may well have included victims, though the focus was on the enemy, and explicit distinctions between victim and non-victim were rarely drawn during the war.
Aliens, Nationals, Enemies, Friends
The specific concern for victims was obscured, though not entirely absent, during the war because political interests and overlapping definitions were at play in domestic law and policy. Treasury was concerned about foreign enemies trying to liquidate assets in the United States, but the Justice Department cared about enemies in (or trying to enter) the country to subvert the U.S. war effort. As one consequence of these differing concerns, definitions of "enemy" varied from one act to another, in turn creating contradictory regulations that had the effect of increasing the discretionary power of government.73
Victims and non-victims alike faced a patchwork of policy and regulations. For an individual, much depended upon whether he or she was classified as enemy or friendly alien, naturalized or non-naturalized, resident or refugee, or as domiciled in the United States or abroad. Yet cutting through these categories were high-level political and legal judgments that certain groups posed no threat, that demonstrated loyalty meant more than formal citizenship, and that even stateless refugees had legal rights. In short, those who had been victimized abroad found themselves categorized in the United States in ways that limited their liberty, including access to their assets, but they could equally well find that some of the very distinctions that were drawn for other wartime purposes worked to their benefit.
Numbers and Definitions
The U.S. government's 1940 Alien Registration Program found 4.9 million aliens in the United States, more than 70 percent of whom (or 3.4 million) had arrived before 1924. About 73 percent of these 4.9 million were from "Europe." 74 The five largest nationality groups, accounting for 60 percent of the European total were, in descending order, Italians, Poles, Russians, Germans, and British. Thus, at the beginning of World War II there was a substantial cohort of long-term resident aliens who had come from Europe.
As for those from Axis countries, the 1940 Census found 1.2 million residents of German birth, 1.6 million of Italian birth, and 127,000 of Japanese birth. In each group, only a minority were not naturalized American citizens. 75 The same was true of individuals from Axis-invaded countries: a 1942 tally found that only 36 percent of the 2.3 million foreign-born residents from these countries were still aliens. 76 A 1942 estimate put the number of refugees from Europe who had arrived in the United States since 1933 at 250,000. 77 Within the United States, of those born in Axis countries, only about one-third still had Axis citizenship.
But executive orders made it unclear whose assets were meant to be controlled. The first freezing order, in 1940, applied to "nationals" of a blocked foreign country, with "national" defined as "any person who has been domiciled in, or a subject, citizen, or resident of a foreign country at any time on or since the effective date of this Order." 78 German Jews and some other refugees made stateless by the Nazis did "not cease to be nationals of such country merely by reason of such cancellation or revocation of citizenship," at least so far as Treasury was concerned. 79 But the freezing order also gave the Secretary of the Treasury full power to determine "that any person is or shall be deemed to be a 'national' within the meaning of this definition." 80 Thus a "national" by the first two statements might be defined by the criteria of former or present foreign domicile and foreign citizenship (even if revoked), yet by the third statement a national could be defined simply through the discretionary power of the Secretary. Subsequent executive orders did not clarify matters. They defined "national" to include foreign nationals who were resident in the United States as in June 1941, or, as in July 1942, defined "national" as "any person in any place under the control of a designated enemy country" with which the U.S. was at war. 81 The first appeared to make citizenship key regardless of domicile, while the second seemed to make both citizenship and domicile irrelevant since it was enemy control that mattered. By contrast, Treasury's General License No. 42 declared that any individual residing in the United States as of February 23, 1942 (including a stateless refugee) was a generally licensed national. Not only did this allow for liberties over assets, it meant that domicile rather than citizenship mattered. 82
There are several explanations for this apparent arbitrariness. First, definitions are confined to the act or regulation in which they appear, and Treasury's interpretations do not appear to have tracked executive orders. Second, contradictions in definitions permitted a kind of "ad hoc blocking" to be imposed if necessary. 83 Third, the point of freezing was to control a potential problem rather than to prohibit all trade. Foreign Funds Control "never intended to subject all individuals within the United States who were nationals as defined in the Order" to its control but rather to draw a distinction between the smaller group of those suspected of "carrying on activities inimical to the public interest" and the much larger group of those "whose activities were clearly above suspicion." 84
Alien Enemies: Restrictions and Rights
The freezing orders and Treasury interpretations define "nationals"--not "enemies." Understanding the difference prompted one commentator in 1943 to note that "Congress may want to make a distinction in favor of those German nationals who are the enemy's most cruelly persecuted victims and to whom it must seem a bitter irony to find themselves treated as our enemies." 85 The reason for this assertion was that the Alien Enemy Act of 1940 had declared that all resident natives, citizens, denizens, or subjects of a country with which the United States is at war and who are not naturalized are liable to be apprehended, restrained, secured, and removed as "alien enemies" in or from the United States. 86 That included Germany's "most cruelly persecuted victims" who had fled to the United States as refugees. The Trading with the Enemy Act had by contrast defined an "enemy" as a person resident within the territory with which the United States is at war, which meant that "enemy" was defined as a nonresident of the United States. 87 As a consequence, "alien enemies in this country [U.S.] are not considered enemies for the purpose of trading with the enemy measures." 88
In December 1941, President Roosevelt issued three proclamations placing restraints on aliens of German, Italian, and Japanese nationality. The restrictions included prohibitions on owning cameras, short wave radios, firearms and explosives, exclusion from living in certain areas, travel restrictions (alien enemies were not permitted to take airplane flights and needed written authorization for trips outside their district), restrictions on changing name, residence, or employment, and a requirement to apply for and carry identification certificates at all times. 89
Though the intent of these restrictions was clearly to restrict subversion, the difficulty, as the Commissioner on Immigration and Naturalization Earl Harrison noted in April 1942, was that "alien enemies" thereby included
persons who have actually fought in battle against Hitler forces; it includes a great many who have bitterly opposed Hitler and Nazism and Fascism in civilian life for years; it includes many who have been in foreign concentration camps, had their property appropriated and their German citizenship revoked; it includes some who...[have] been classified as friendly aliens in England; it encompasses many who do not recall any country other than the United States and whose American born children are now serving in the American army. 90
Others, too, reiterated Harrison's point at the time. 91 The implication was that nominal citizenship mattered much less than loyalty to the United States, particularly if it was "honestly-determined loyalty of the individual rather than his assumed loyalty." 92
The category of "enemy alien" also was not as comprehensive as it might have been. Austrians, Austro-Hungarians, and Koreans, for example, were not defined as alien enemies, nor were former Germans, Japanese, or Italians who had become naturalized citizens of neutral or friendly countries. 93 Executive Order No. 9106 (March 20, 1942) excepted persons Attorney General Francis Biddle had certified, after investigation, as loyal to the United States, specifically for the purpose of allowing such persons to apply for naturalization. 94 On Columbus Day (October 14) in 1942, Biddle also announced that the more than 600,000 resident Italian aliens would henceforth be exempt from the restrictions placed on enemy aliens, and subsequently issued the relevant orders making it so. 95
Soon after Pearl Harbor, Attorney General Biddle made several strong statements in favor of tolerating "all peaceful and law-abiding aliens," reassuring non-citizens that the U.S. government would not interfere with them "so long as they conduct themselves in accordance with the law." 96 Common law and legal precedent established the general rule that for aliens, "lawful residence implies protection, and a capacity to sue and be sued" unless that right was expressly withheld by law. 97 This general rule was reaffirmed in the case of Kaufmann v. Eisenberg and City of New York (1942) 98 with the words "the right of a resident enemy alien to sue in the civil courts like a citizen has been accorded recognition under the generally accepted rule," a ruling reaffirmed by Attorney General Biddle who stated in a Justice Department press release that "no native, citizen, or subject of any nation with which the United States is at war and who is resident in the United States is precluded by federal statute or regulations from suing in federal or state courts." 99 Thus, even those formally designated as "enemy aliens" could have their day in court. 100
Despite long residence in the U.S. or demonstrated loyalty, even some "friendly aliens" saw their property confiscated. 101 However, their right to just compensation, in keeping with the Fifth Amendment, had been affirmed several times by the Supreme Court in the 1930s and was reiterated after World War II. 102 Lower courts clearly affirmed the right of friendly aliens to be given the same treatment as citizens, to recover their property in kind or sue for its return, 103 or be provided administrative means to do so. 104 Thus, being defined as "friendly" rather than "enemy" was important for the recovery of assets, and all "aliens" had recourse to the courts.
Aliens and Real Property
Though the Treasury Department granted "generally licensed" status to resident aliens in February 1942, New York and some other states reserved to themselves the power to escheat property, which meant the state took control over real property upon the death of the owner, particularly if there were no heirs to claim it. 105 Under Section 10 of New York's Real Property Law (1913), a statutory provision held that "alien friends are empowered to take, hold, transmit and dispose of real property within this state in the same manner as native-born citizens," 106 and Judge Benjamin Cardozo in Techt v. Hughes (1920) subsequently defined "alien friends" as "citizens or subjects of a nation with which the United States is at peace." 107 That explicitly excluded citizens of countries with which the United States was at war, 108 so that a real property title held by an "alien enemy" in 1942 would "upon his death immediately escheat" as long as there were no heirs. 109 The New York Public Lands Law had "provided machinery whereby the putative 'heirs' of an alien enemy may secure, at a very favorable price, a release of escheated lands." 110
But all was not as it appeared, and key political figures were quite aware of predicaments stateless refugees could face. Already on July 1, 1942, New York State Attorney General Bennett, in an informal letter opinion to the Jewish Agricultural Society, Inc., suggested that those deprived of German citizenship by German law should be regarded as "alien friends." 111 By March 22, 1944, the New York State Legislature had abolished the disabilities "alien enemies" had under the New York Real Property Law by the simple expedient of deleting the word "friends" from the statute. 112 In making this change, the legislature may have been responding to the many long-term resident aliens in the state who "were unquestionably 'alien friends' " but were facing an inheritance law that was at best "only a dubious means of enriching the state at the expense of harmless and innocent people." 113 The result of the legislative change was to permit all aliens to hold and will to heirs real property in the same manner as citizens. 114 While this may not have prevented the escheating of Holocaust victim property when there were no heirs, it also indicates that even during war, states removed some of the legal disabilities aliens faced. Inheritance, of course, became complex when it was a matter of alien heirs resident in the United States from decedents who were citizens of enemy or enemy-occupied countries, though authentication systems for foreign records were developed even in the New York State court system. 115
Victims in Europe
FFC took positive steps to assist victims in Europe. In 1942, initial inquiries about licenses were made
for the purpose of providing funds for getting persons out of enemy or enemy occupied areas....Thereafter we [FFC] approved applications for licenses to effect remittance in reasonable amounts to neutral areas on behalf of prospective emigrants from enemy territory....During 1943, we were receiving reports of the character of the German treatment of refugees, particularly Jews, throughout the areas under their control. 116
Despite Treasury restrictions under General Ruling 11 that explicitly prohibited communication with enemy territory as well as financial transactions with those in enemy territory, FFC "re-examined our general trading with the enemy policy...to permit operations designed to bring relief to particularly oppressed groups in enemy territory." 117 These included funding underground organizations, supporting U.S. organizations that could conduct relief operations in enemy territory, and establishing safeguards to keep funds from falling into enemy hands. "In view of the policy of the enemy to annihilate certain minority groups either by slaughter or starvation, operations to bring relief [would further the] fundamental objectives of the United Nations." Accordingly, FFC decided it "should permit certain responsible groups to enter into arrangements to bring some relief to groups in enemy territory." 118 FFC thus authorized the Legation in Bern to give the World Jewish Congress a license permitting it to obtain local currency to help in the evacuation of refugees, and allowed it to communicate with enemy territory. This license was subsequently amended to permit acquiring currency "from persons in enemy or enemy-occupied territory against payment in free currency rates" in order to "assist in the evacuation of victims of Nazi aggression," a policy cleared through "Treasury and other Departments of the [U.S.] government." 119 The Treasury Department deliberately made an exception to its restrictive policies in order to provide aid to victims.
"Vesting" Assets and the Office of Alien Property Custodian
Creation of the Office of Alien Property Custodian
Congress considerably expanded the President's regulatory power when it passed the First War Powers Act on December 18, 1941, giving the Chief Executive the power to "vest" (seize, or take over the title to) the property--including businesses--of any foreign country or national. 120 While the power to freeze left the title with the original owners, with "vesting," title passed into the hands of the U.S. government, with the declaration that seized property could be used for the benefit of the United States. 121 The Office of Alien Property Custodian (APC) would be given far more direct power over businesses than the Department of the Treasury had been granted, though it would be exercised over far fewer businesses. 122
President Roosevelt could not easily bring APC to life because the precedent was inauspicious. An Alien Property Custodian had been appointed during World War I, but the office had been scandal-ridden, and one custodian had even gone to prison in the wake of a postwar congressional investigation. 123 The first Custodian's Office was abolished in May 1934, but its remaining functions were still being carried out by the Alien Property Division in the Justice Department in early 1941. 124 The Attorney General lobbied Roosevelt to have a new custodian appointed in the Justice Department, but the Secretary of the Treasury did not want the functions of FFC to be undermined, and any new custodian would also have to take over the alien property issues that still remained from World War I. The APC was finally launched as an independent agency on March 11, 1942 by Executive Order 9095. It was placed in the Office for Emergency Management of the Executive Office of the President and its function was to seize or vest and take over the ownership of certain types of enemy property that was not already frozen or blocked and regulated by the Treasury Department.
The Process of Vesting
Within APC, an Investigations Division looked for property that should be seized. 125 Much of its information came at first from Treasury's 1941 Census of Foreign-Owned Assets, though the Custodian's office also relied on the Justice Department, OSS and other intelligence agencies, the Securities and Exchange Commission, and the Patent Office. 126 The Investigations Division then made recommendations to the Executive Committee, chaired by the Deputy Custodian, and that committee made recommendations to the custodian. The final decision to vest lay with the custodian.
If a vesting order was issued and published in the Federal Register, that transfer of title was immediately and summarily effectuated. 127 Vesting, however, was not the only option available, for the custodian could also provide for "direction, management, supervision, and control" without transferring ownership, an option particularly suited for the vesting of business enterprises. General Orders, usually relating to specific classes of property, were also issued, requiring specific action on the part of persons who held an interest in the asset in question. 128
The Custodian's Office also established mechanisms so that if a mistake was made in the decision to vest, "every American and friendly alien [was] given opportunity to show that his rights [had] been infringed." 129 The operating principle was that "mistakes against our friends could be corrected, but mistakes in favor of our enemies might be fatal." 130 Any person other than "a national of a designated enemy country" could assert and file a claim with the Alien Property Custodian, requesting a hearing within a year from the time the vesting order was issued. Claims were heard by the Vesting Property Claims Committee. The committee, set up on July 22, 1943, found itself busy, processing more than 2,000 claims within a year of its establishment. 131 Once the committee had reviewed a claim, it passed its recommendations to the Alien Property Custodian. Some contemporaries remarked that because the custodian had appointed the committee members, this process made him "judge and defendant in his own case." 132 Others, however, believed that the procedure met "the basic constitutional requirements for administrative review." 133
What would become of property taken under control was not always clear, and the wording of the vesting orders themselves left open what would happen to the property. The orders read that property and its proceeds "shall be held in a special account pending further determination of the Alien Property Custodian." That determination might include returning the property, returning the proceeds from the sale of the property, or paying compensation for it "if and when it should be determined that such return should be made or such compensation should be paid." Vesting, however, could also mean property could be "administered, liquidated, sold or otherwise dealt with in the interest of and for the benefit of the United States," and that could mean a determination that nothing would be returned. 134 It was also possible to interpret the power to vest merely as an act of custody. The press release accompanying Vesting Order No. 1 (Feb. 16, 1942) stated that vested property was to be considered as "sequestered." 135 The precedent of World War I could be read two ways as well, either implying confiscation--since the Supreme Court had held in 1924 that the end of World War I did not bring with it a right to have property returned 136 --or implying a return of the proceeds, since alien properties had been sold but most of the proceeds subsequently returned to the former owners. 137 The ultimate disposition of vested property remained unclear during the war, and in any case was a matter for Congress to decide.
Faced with this uncertainty over eventual disposition, the Alien Property Custodian equivocated. 138 In the case of vested businesses, some were sold but others were run as going concerns, sometimes with salaried employees of the APC acting in supervisory or directorial capacities, as much for lack of skilled and competent managers as out of fear of enemy influence. On the other hand, the Custodian's Office really did not want to assume the direct responsibility for everything from methods of production to labor relations, arguing that "activities of this character are foreign to the effective operation of the Custodian's Office as an agency of the government." 139 Assets were also treated selectively, since not all assets were readily convertible into cash, or even if they were, equally valuable. Thus, patents were ordinarily vested but mortgages and life insurance policies were not. The general rule of selling vested property at public sales (by General Order 26, of June 9, 1943) by sealed written bids was hedged with all kinds of exceptions: property worth less than $10,000 might be sold privately or not advertised for sale; brokers might be used in exceptional circumstances; some classes of persons (such as those on the Proclaimed List) would not be permitted to buy; perishable commodities or property that was expensive to retain might be disposed of through privately arranged sales; and some property could not find willing buyers at the assessed value.
The most useful and pragmatic solution, the APC argued, was to convert vested property (other than patents and copyrights) into cash and hold it in separate accounts, pending Congressional decision about settlement, and the decision to sell at the best price was compatible with a decision to provide full compensation since "the original owners are in general interested not in specific pieces of property but in the economic value of their property as a source of income." 140 Whether this assumption was justified, at least in 1944, "it seems certain...that provisions will be made for the return of property to nationals of non-enemy countries." 141 The provision of separate accounts of course also made it easier to return vested property.
Evaluating the Property Taken Under Control
Because so much property in so many different asset categories was seized from enemy nationals for the war production effort, it is likely that some victims' assets were inadvertently taken in the process. Knowing the extent of the value of all assets taken was important at the time: it provided a basis for Congressional decisions about future disposition. For individuals or companies whose assets were seized, the value would become part of the claim for return that could be filed, including for the return of property seized in error. The vesting program is also one of the instances where the property of individuals was taken by the U.S. government, though there is evidence that some of that property was also returned. That intellectual property formed a large part of the assets seized adds still another dimension.
In many ways, the APC resembled a holding company, not only because it controlled assets of considerably greater value than its ownership equity in them, but also because of the wide variety of property involved. 142 The Custodian was the majority stockholder of corporations producing cameras, dyestuffs, potash, pharmaceuticals, scientific instruments, and alcoholic beverages, but was also in charge of guardianship estates of Japanese children born in the United States who had been sent to Japan for their education. The Custodian's Office held the largest patent pool in the country, and it vested over 200,000 copyrights during the war. It also controlled dairies, banks, and retail stores. It was the successor to the enemy heirs of more than 2,000 American residents whose estates held cash, real property, jewelry, securities, and other valuables, but it was also in charge of "bankrupt enterprises, damaged merchandise, rural wasteland, and bad debts."143
Copyrights, Trademarks, and Patents
Copyrights, trademarks, and patents are structurally similar, as they protect an exclusive legal right, whether to make, use, or sell an invention (in the case of a patent), to reproduce, publish, and sell a literary, musical, or artistic work (in the case of a copyright), or to reserve use of a mark to the owner as maker or seller (in the case of a trademark). Put more precisely, at least in the case of a patented invention, it "confers the right to secure the enforcement power of the state in excluding unauthorized persons, for a specified number of years, from making commercial use" of it. 144 In the context of the U.S. control of assets during wartime, however, a difference was drawn in practice, because while only selected copyrights and trademarks were vested, "all patents of nationals of enemy and enemy-occupied countries" were vested.145
Copyrights. U.S. copyright protection was limited in scope for foreign holders, largely out of protectionist impulses. In the early 1940s, Congress had
sought copyright monopoly for United States authors in other countries; but reciprocal protection in the United States could only be obtained on condition that for the most part copyrighted works should be manufactured in the United States. Thus, the text of a book published in another country would have to be re-set in type and wholly reproduced in this country or else be open to piracy by publishers here without any legal remedy by the holders of the violated copyright.146
As a result, even before APC began to vest copyrights, protection for non-U.S. authors was weak unless they had prewar agreements to produce their works in the United States. In any case, the freezing order had included copyrights, trademarks, and patents, and they had to be reported in the 1941 Census of Foreign-Owned Assets even if the value was less than $1,000.147
For the custodian, the operative principle in deciding to vest a copyright was whether a work had financial value or was of importance to the war effort. The latter reason justified taking copyright title not just from the nationals of enemy countries but also from nationals in enemy-occupied countries, which of course meant that the copyrights of victims might also be seized as no distinction was drawn at the time of vesting.148 In a tally covering the wartime vesting period (March 11, 1942, to June 30, 1945), 120,690 of the nationally identifiable copyright interests vested were in fact for sheet music, 82 percent of which were in the hands of French and Italian music publishers. This is why almost all of the $1 million in copyright royalties paid and collected by the custodian in this period were the result of prewar contracts, and why the lion's share went to French (49 percent) and Italian (17 percent) publishers and copyright holders.149 Thus, Claude Debussy's Clair de Lune, used in the film Frenchman's Creek, brought royalties to the APC, as did performances of Puccini's La Boheme, Tosca, and Madame Butterfly. Even the German war song "Lili Marlene" brought in $10,000 in royalties by 1945 under the 23 licenses that were granted for its use in films, radio, on stage, and as sheet music.150
A more direct connection to the war effort was the licensing and republication of important German books and periodicals in metallurgy, physics, mathematics, medicine, and chemistry.151 One of the most significant works was Friedrich Beilstein's Handbuch der organischen Chemie, originally published in 59 volumes at a cost of $2,000, but now made available in the United States in a photo-offset reprint for only $400, and a work on which the Custodian's Office collected nearly $41,000 in royalties.152 By January 1, 1945, the Office had also "reprinted one or more volumes of approximately 100 different scientific periodicals, chiefly German," mostly for industrial concerns or research institutions and universities. The republication of articles from Die Naturwissenschaften and the Zeitschrift für Physik were regarded as "one of the factors which made the atomic bomb possible" by some of the American scientists involved in its development.153
Most surprising, however, is the list of European authors whose works were vested, including Henri Bergson, Karl Capek, Madame Curie, Georges Clemenceau, André Gide, André Malraux, Guy de Maupassant, Baroness Orczy, Romain Rolland, Edmund Rostand, and Georges Simenon. "Among French books, the gay Babar elephant stories for children enjoy great popularity," one learns, and "the Seven Gothic Tales and Winter's Tales, by the Danish Baroness Blixen, earned substantial amounts in royalties collected by the Office."154 However, Baroness Blixen's copyrights, including those to Out of Africa, were returned to her at the end of 1950, along with $33,558.67 in royalties.155
The number of victims whose copyright royalties were seized is unknown, but those who were victimized by the Nazis might have taken some satisfaction in knowing that one particular author did not see any of his royalties. Adolf Hitler's Mein Kampf, a work first published in the United States in 1933, had its copyright vested, and the royalties--totaling $20,580 by June 30, 1945--were held in an account in the name of Franz Eher, the publisher of Hitler and the Nazi Party. A dry note was added to the Custodian's 1944 Annual Report that "the ultimate disposition of Hitler's royalties, as of all other property in the hands of the Custodian, remains to be decided by the Congress."156
Trademarks. Trademarks, as devices that on the one hand imply a right to exclude others from using a name or symbol, and on the other hand try to provide assurances of goodwill (or an absence of deceptive practices) on the part of a business, are difficult to value in terms of dollars, let alone for war purposes. Dealing with trademarks nevertheless was part of a strategy to encourage a negative attitude towards the enemy:
A trademark belonging to an Axis business enterprise represents an investment in good will, and is part of that enterprise's enduring roots in the country. Disposition of an enterprise should include the disposition of the trademark as well. Destruction of a trademark might be the best method of disposition.157
Anti-enemy sentiment was fanned by assertions such as "every time an American bought a box of headache remedy with a certain trademark a few cents were added to the German coffers,"158 whether or not such statements were accurate.
Many trademarks went unused, as was true of the more than 7,000 trademarks in the names of nationals in enemy or enemy-occupied countries that were registered in the United States in 1944. By June 30, 1945, only 412 trademarks had been vested, 325 (79 percent) of which were owned by vested enterprises, and 357 (87 percent) of which were German.159 An opinion of the General Counsel of the APC on July 22, 1943, stated bluntly that "unless the business and good will in connection with which a particular trademark is used are vested, a vesting of the trademark gives the Custodian nothing."160 Furthermore, 47 percent of the vested trademarks were for cosmetic and soap products, and only 27 percent were for products, such as medicines, pharmaceuticals, chemicals, and scientific appliances, that were potentially useful for war purposes.161 By June 30, 1945, only $568,000 had been collected by the APC in trademark contracts.162
Patents. Patents and their role in controlling the market through monopolies and cartels had been an issue long before the war. "The interchange of patents between American and foreign concerns," one journalist had argued,
has been used as a means of cartelizing an industry to effectively displace competition. The production of...beryllium, magnesium, optical glass and chemicals has been restrained through international patent controls and cross-licensing which have divided the world market into closed areas.163
Indeed, the control or restraint of world trade through patent arrangements formed a prominent part of the Kilgore Committee hearings in the Senate in 1943 and 1944, with numerous antitrust cases subsequently filed against U.S. companies alleging "German control over our industry."164 In light of prewar arrangements between I.G. Farben and Du Pont (1925) and between I.G. Farben and Standard Oil of New Jersey (1927), such concerns appeared warranted 165 --though the warnings of the danger sometimes verged on the hysterical.166 Custodian Leo Crowley made clear the continuity between prewar cartels and wartime use of patents when he told the Senate that "the primary purpose of vesting and administering foreign-owned patents is to break any restrictive holds which these patents may have on American industry, particularly restrictions which may operate to impede war production."167
Soon after the Alien Property Custodian's office was established, it launched an investigation into patents, patent applications and patent contracts, in order to vest all that were "owned by persons in enemy and enemy-occupied countries," other than those in which a bona fide American interest existed. General Orders 2 and 3 (both June 15, 1942) of the Alien Property Custodian had already required the filing of a report by anyone claiming right, title, or interest to a patent granted to a "designated foreign national" since January 1, 1939, as well as a declaration that the patent holder was not at present residing in an enemy or enemy-occupied country. Armed with this information, the Office was able to exclude patents that needed more investigation to determine how much they were controlled by American interests. On December 7, 1942, the President sent a letter to the Custodian directing his Office "to seize all patents controlled by the enemy, regardless of nominal ownership, and make the patents freely available to American industry, first for war purposes of the United Nations, and second for general use in the national interest."168
By the end of 1942, about 35,000 patents "presumed to be enemy owned or controlled" were vested.169 A comprehensive list, organized into 110 different classifications, indicated a total of 36,675 patents vested by January 1, 1943, with the classifications ranging from a high of 1,998 patents vested in "radiant energy" and 1,607 in "chemistry, carbon compounds" down to two patents for fences and a single patent for needle and pin making.170 By June 30, 1945, a total of 46,442 patents, patent applications, and unpatented inventions from nationals of enemy and enemy-occupied countries had been vested.171 Of these 46,442, the vast majority (42,726) were patents, 64 percent of which were held by German owners. By June 30, 1945, about 6,000 of these 42,726 patents had expired, and patents held by Italian nationals as well as by Europeans in liberated countries were no longer vested after September 1944, so the effective number of "live patents" by 1945 was around 36,700, close to the number of patents vested by the end of 1942.172 The Census of Foreign-Owned Assets in 1941 had indicated a total of around 65,000 foreign-held patents and agreements related to patents,173 so only slightly more than half of all foreign-held patents were actually vested by the APC.
In making them "freely available," patents held by enemies were made royalty-free (so no profit went to the enemy), nonexclusive, and revocable (so no one using the patent could benefit from the value accruing to an exclusive right), and were licensed after the payment of a small fee. All of this was "tantamount to the destruction of the right."174 For patents held by nationals of enemy-occupied countries, licensing was more complex. Initially, the Custodian issued royalty-free licenses for the duration of the war plus six months, but after several governments-in-exile protested, in 1944 this policy was changed to provide for licensing with "reasonable royalties" from the date of licensing, unless it was a license for war production. As countries were liberated, the policy changed again "because the nationals of the liberated countries now could carry on negotiations themselves" over patents.175
The total number of licenses actually granted under patents vested from nationals of enemy countries was very small, likely because of the complications that were feared once the patents were returned to their owners at the end of the war.176 APC made considerable efforts to let potential users know about the technical information in the vested patents,177 but the Office was hampered by the nature of patents themselves: many patents are taken out based on laboratory findings rather than commercial applicability, patents become obsolete, patents can be unworkable owing to lack of resources, and patents can be encumbered by prewar contracts and commitments. Thus in practice, only about two-thirds of the vested patents were licenseable, and of these 22,000, licenses were in fact granted under only 7,343 different patents--2,000 of them to a single firm.178
But if only about a third of the available patents were actually exploited in the United States, the following (selected) list of products manufactured under vested patent licenses by the end of 1944 gives a sense of which patents were of greatest interest: 42 million gallons of nitration-grade toluene (for explosives and aviation fuel), 66 million pounds of processed tin, 23 million pounds of polyvinyl chloride, 320 thousand feet of steel cable, 500 propeller blades, 450 thousand barrels of cement, and 110 thousand dozen pairs of ladies hosiery.179
A different list from mid-1943 highlights those patents that in some manner were used to create products that contributed to the war effort:
Typical licenses already issued are for high explosives, collapsible boats for the Navy, fire-fighting material, power transmission, intermediates for pharmaceuticals, a magnetic alloy composition, aluminum production, surgical bandages, electrical current amplifiers, synthetic resuscitants, machine tools, camera equipment, and die presses and machines for stretching and drawing metal.180
That not more than a third of the available vested patents were used was attributed to the revocability of the licensed patents, the absence of exclusivity and royalties, or the use by "big business" of these patents. In the view of the Custodian's Office, the "real explanation for the existence of unused vested patents is that they are not commercially valuable" even if they were helpful in certain aspects of war production.181 But patents had to be put to work, as "our friends in the occupied countries would hardly have us do less than to turn their patent rights into active weapons of warfare for the defeat of their oppressors."182 Thus, even if victims were seeing their patent rights seized, the APC wanted to reassure them that they were being put to good use.
By the end of June, 1945, $8.3 million in patent royalties had been collected under vested patents and patent contracts, two-thirds of them German. Half of the $8.3 million total had accrued prior to vesting,183 indicating that the APC was merely continuing with freezing and immobilizing the financial assets represented by patents, while it tried to disseminate the information patents contained for use in manufacturing and in the war effort.
Businesses, Real and Personal Property, Estates and Trusts
Other property categories vested by the APC were easier to estimate in monetary terms than patents or copyrights. The following table highlights the major asset categories:
Table 7: Net Equity Vested by the Custodian, Ranked by Largest Type of Property 184
(Domestic Assets Only, As of June 30, 1945)

By Specific Type of Property:
- Vested businesses: stock
- Vested businesses: equity
- Estates and trusts: trusts under wills
- Estates and trusts: decedents' estates
- Vested businesses: notes/accounts receivable
- Estates and trusts: inter vivos trusts
- Personal property: bonds
- Real property: real estate
- Personal property: notes, claims, credits
- Estates and trusts: guardianship estates

By Category (Percent):
- Vested business enterprises
- Estates and trusts
- Other (real and personal property; royalties)

By Nationality of Former Ownership
Businesses. Business enterprises in the United States were the single largest individual form of property in dollar terms controlled by nationals of enemy countries--$151 million of the $208 million vested in the Custodian.185 A total of 408 enterprises, 71 percent of them corporations, with assets amounting to $390 million, had their interests vested from 1942 to 1945. Of these 408, 200 (49 percent) were German-owned, 169 (41 percent) were Japanese-owned, and 33 (8 percent) were Italian-owned.186 Control was typically exercised through the acquisition of enough voting shares to exert dominant control, such that 61 percent of the 408 enterprises had 50 percent or more of their voting stock controlled by the Custodian.187 In terms of categories of enterprises, those with the largest book value were chemical manufacturers (21 companies, most of them German, with total assets worth $162 million), but the largest single group was in wholesale trade (153 companies, mostly small, with total assets worth $45 million).188 So though chemical manufacturers accounted for only 5 percent of the companies, they represented 41 percent of the assets vested, while wholesale trade accounted for 37 percent of the companies but only 11 percent of the assets vested.
If a business could be run profitably and perform useful functions, it was operated as a going concern. As of June 30, 1945, 117 of the 408 enterprises were operating with total assets of $257 million as of that date, the most important of which (in terms of total sales as well as sales of war products) were again the chemical manufacturers.189 The remaining 291 enterprises were placed in liquidation, many of them having depended on trade with enemy countries (e.g., import/export firms, steamship companies). Relative to the amount realized, it can hardly be said that the United States profited much from the seizure of these businesses.190
Real Property and Tangible Personal Property. A small amount of real property ($4.3 million) was vested during the wartime operation of the Custodian's Office, of which German-owned properties were the largest ($2.3 million) by nationality. Most of this property (worth $3.5 million) was urban and consisted of either single dwellings or commercial buildings (319 and 96 of the 622 properties vested, respectively), though ten small hotels and rooming houses as well as two Japanese Shinto temples were also vested.191 One-third of the properties were on the eastern seaboard, though there were properties scattered across the country.192 Sales proceeds from properties, though 11 percent better than their appraised value, yielded only slightly more than $2 million.
As with the sale of vested businesses, there were difficulties in selling vested property, ranging from title insurance problems (affecting 81 real estate parcels) to political considerations (the sale of 98 real estate parcels vested from Italian and Austrian nationals, for example, was discontinued at the request of the State Department).193 Vested tangible personal property was even more difficult to sell. The Custodian's annual report for 1943, for example, noted with some satisfaction that the Office had found "two carloads of steel bars owned by an Italian concern, which had stood on a siding in the New Jersey freight yards for almost 2 years," and that these were thereupon vested.194 But though the Custodian "offered at public sale 77,161 pounds of steel bars vested from Alfa-Romeo" on April 3, 1944 (possibly these same bars), valued at $24,000, no buyers could be found at that price. In fact, the highest bid the Custodian received was $825. The warehouse where the bars were stored was urgently needed for other war purposes, and moving the bars would have cost $400, so the Custodian settled for the best price obtainable in September 1944: $1,375.195 Of course, while this kind of property might end up being sold at steep discounts "or even at scrap value," personal property such as jewelry or art objects might sell for a third more than their appraised value. Total tangible personal property vested amounted to only $901,000 and sale proceeds by June 30, 1945 were only $452,000.196
Estates and Trusts. From March 11, 1942, until June 30, 1945, the Custodian vested a total of 2,997 estates and trusts, worth an estimated $41.1 million. Of that total, 77 percent (2,330) were formerly German-owned, constituting 82 percent ($33.6 million) of the value, the vast majority of them decedents' estates (1,708, valued at $11.7 million) and trusts under wills (541, valued at $17.1 million).197 Distribution to the Custodian from executors, trustees, and fiduciaries of estates reached $13 million by mid-1945, nearly all of which came from decedents' estates and trusts under wills.
Heirs and executors could find themselves brought up short by the fact that the Custodian's determinations as to property and survivorship were conclusive. That was despite the fact that survivorship was "often extremely difficult, and not infrequently insoluble" to determine when a legatee was an enemy national resident in an enemy country.198 Complicating matters were contradictory legal rulings: some cases stated that lack of evidence of death created a presumption that heirs were still alive, while other cases declared the presumption did not exist owing to the ravages of war.199
Some securities from the distribution of estates and trusts or from the assets of vested enterprises were also vested, $1.8 million worth of stocks and $4.8 million worth of bonds, with more than three-fourths of the latter U.S. government bonds.200 As of June 30, 1945, the Custodian still held $871,000 worth of stocks and $2.2 million worth of bonds, indicating that $3 million realized from these sales was less than half of their total value of $6.6 million.201
The Postwar Period
Postwar Vesting and Return of Assets
Though Germany capitulated on May 8, 1945, the official cessation of hostilities was only proclaimed on December 31, 1946, and the official declaration of the end of the state of war between the United States and Germany came only on October 25, 1951. Thus, at least in formal terms, the "end" of the war was protracted, and the same can be said for the end of vesting. To be sure, Italian assets were no longer vested after December 1943, and as European countries were successively liberated in 1945, the property of nationals from these formerly enemy-occupied countries ceased being vested, though by that date the only property still being vested by the United States was patents and copyrights.
Yet in the later years of the war, the fear grew that
even though victorious, we shall still find large segments of our industry being controlled and manipulated from Berlin and we shall still be harboring within our borders a Nazi army ready to resume the task of boring from within in the hope of ultimately taking revenge for its previous defeat.202
Such an assessment may have lain behind the decision in the spring of 1945, when
the Secretary of State, the Secretary of the Treasury, and the Alien Property Custodian agreed that all property in the United States of hostile German and Japanese nationals should be vested in the Alien Property Custodian and that neither the property nor its proceeds should be returned to the former owners. Accordingly, they recommended to the President that the Custodian be authorized to vest German and Japanese bank accounts, credits, securities, and other properties not seized under the original vesting program. This recommendation was approved, and the expansion of the vesting program was authorized by Executive Order No. 9567 on June 8, 1945.203
Postwar vesting of German and Japanese-owned property would not be terminated until April 17, 1953, by which time an additional $210 million was paid into the Treasury from these seizures.204 In October of 1946, the APC was transformed into the Office of Alien Property (OAP) in the Justice Department, and over the next seven years, the pace of vesting increased substantially.205 In fact, 60 percent of all vesting orders issued from 1942 to 1953 were issued after 1946. The OAP relied on the FBI and the Treasury Department to investigate ownership before vesting, but with an important caveat.206 All property of German and Japanese citizens "is vested unless available evidence indicates they were victims of persecution by their governments."207 According to the Deputy Director of the OAP, the Office "took great pains to avoid vesting the property of such persons."208
Though postwar vesting was limited to property controlled by German and Japanese nationals, a reporting of assets similar to the wartime census was demanded of asset holders. Even among the first 12,000 reports received by October 1946, about one-third had to be set aside as they appeared to cover property that could not be vested.209 On the other hand, the continued concern over cloaked ownership prompted intensive investigations to locate as yet undisclosed property still controlled by enemies.210
In 1947, the OAP began returning vested property of all kinds--including cash, patents, interests in estates and trusts, copyrights, shares, and real property--to claimant individuals and businesses. By 1958, approximately 3,700 return orders had been issued, as compared to the over 5,000 vesting orders issued during the war and the over 14,000 issued thereafter.211 Thus, only about one in five of the properties seized were returned even by 1958, and of those returns, to judge by the related vesting order numbers, only about one in five were for properties vested during the war. The names (and businesses) to whom property was returned give few clues as to identity or status, and even the precise nature of the assets being returned is unclear: Return Order No. 4 gave back "certain patents" to "Arnold Janowitz and others" on February 24, and Return Order No. 2827 gave "securities" back to Leo Robinsohn on June 29, 1956, for example. A survey of the cash returned, some possibly from the sale of non-cash assets, shows a distribution similar to that of the 1941 Census of Foreign-Owned Assets, with quite small amounts (some as little as $3 or $10) returned to many individuals, and much larger amounts returned to a few companies and associations. Even the largest amounts were not that great. From 1947 to 1958, only 44 return orders, half of them to business enterprises, were issued for amounts over $100,000. The largest single amount returned by the OAP was the $4.7 million returned to "Wm. Kohler, Albert Arent, and Fidelity--Philadelphia Trust Co., sub-trustees of Trust of Dr. Otto Rohm" on June 30, 1954; the next largest was the $1.5 million given back to the Banco di Napoli in 1949.212
Initially during World War I, the Custodian thought of himself as a trustee; later, he saw himself as the confiscator of enemy property. Confiscated property was sold, and the proceeds were meant to be used to satisfy U.S. war claims.213 The same line of thinking recurred during and after World War II with freezing (trusteeship) and vesting (confiscation) followed by the argument in Congress in 1946 that the proceeds from the sale of vested property should be used to pay for war damage claims of Americans. If foreign nationals had had their property seized abroad, then it was the responsibility of their governments to compensate them for those losses.214
In fact, the peace treaties signed with Italy, Bulgaria, Hungary, and Romania included explicit commitments by these countries to compensate their own nationals, but only Italy seems to have lived up to this obligation.215 A similar commitment formed part of the Paris Reparation Agreement on Germany (as well as subsequently directly with Germany in 1952), 216 and was confirmed for the United States as well through passage by Congress of the U.S. War Claims Act in 1948.
The difficulty even for those working in the OAP was the "unrealistic bookkeeping" such commitments entailed. Though U.S. war claims after World War II were greater than those made after World War I, less property was seized, and the costs to the United States of occupying Germany were much greater than the war claims of Americans against Germany.217 Thus it made little sense to use seized German-controlled property in the United States to settle American war claims against Germany while at the same time spending far larger dollar sums to occupy and rebuild Germany. It was a good deal more reasonable
to recognize that the damages done by Germany will never be made whole; that valid war claims will never be paid; that compensation for war claims to the extent that it is allowed must come, if at all, from the American Treasury; and to devote all German assets in this country directly to the relief of Germany, which we have, in fact, undertaken.218
But the "theory that the interests of American claimants for damage suffered at the hands of the Nazi government should have precedence" evidently took priority.219 Section 13 (a) of the 1948 War Claims Act created a War Claims Fund that was supervised by a Congressional War Claims Commission. That Commission saw its duty, according to an OAP critic, to "insure at all times the sufficiency of funds to provide for payment in full of every valid claim" possible under the Act.220 But since claims greatly outstripped available funds, the War Claims Commission in effect became "a pressure group with official status seeking the enlargement of the Fund."221 As the War Claims Act was financed through the Treasury Department, the argument in Congress was that proceeds from the sale of German and Japanese vested assets could be used to pay American war claims--an argument that drew objections from both the Justice Department and the Bureau of the Budget. For victims' heirs, the situation was even worse:
when the question is presented whether property of Jewish families exterminated at Auschwitz and Buchenwald should continue to be held by the United States, the issue should be determined in terms of the justification or lack of justification for the seizure and not in terms of the effect of a return on the [War Claims] fund.222
But in the end, by 1954, the War Claims Commission received $75 million in funding from seized German and Japanese assets, although evidently efforts had been made to insure that these would not, in fact, be assets of persecutees.
Though the category "victim" was not explicitly recognized as such until after the war, there was no evident intent to immobilize or seize assets from those who were clearly not enemies. Even in the midst of the war, John Foster Dulles could write that "nothing in the Congressional hearings or debates suggests any intention to confiscate the property of friendly aliens, neutrals, and the victims of German aggression" by the Alien Property Custodian.223 In addition, assets vested after the war were limited to those controlled by Germans and Japanese.
In practice, both the Treasury and Justice Departments tried to distinguish a small, problematic group that was trying to exploit assets or their access to the United States from the large, unproblematic group of those with legitimate control over assets in the United States or legitimate reasons for being here. The focus was on potential saboteurs, smugglers, and the businesses that acted as fronts for Nazi interests, as well as on the assets of innocents abroad that had been, or were in danger of being, captured by Nazi invaders. The Alien Registration Act, the freezing and then licensing of financial transactions by FFC, and the work of Customs inspectors were all directed towards preventing the enemy from exploiting assets in or access to the United States. The very existence of error-correction or asset-protection mechanisms, whether in the form of Treasury regulations and policies or the APC claims process, lends weight to Dulles's assertion. Other practices, ranging from the wartime reassertion of the legal rights of aliens to the exclusion of certain nationality groups from enemy alien restrictions, also testify to a desire to draw distinctions that would benefit those who had been persecuted. The evolution in thinking about persecutees would lead to an explicit recognition of their special status, reflected in the August 1946 revision of the Trading with the Enemy Act (TWEA), when they were excluded from postwar vesting.
Foreign Funds Control tried to protect European property owners from Nazi actions, though for assets already in the United States in 1940 and 1941 the effect was limited by the very nature of the asset pool. Not only was that pool comparatively small and skewed towards assets controlled by nationals of Allied rather than Axis powers, but according to the 1941 census, these assets were distributed between a small group of businesses with large holdings and a large group of individuals with small holdings. That same distribution was also evident in the claims honored by the OAP after the war, with a few large sums returned to a few companies and many small sums (and other assets) returned to many individuals.
In fact, other than the looted Dutch securities that were eventually restituted to the Netherlands, FFC-controlled assets were not vested by the APC.224 For individuals in the U.S. who needed funds during the war for living expenses, or for organizations working to help those trying to flee from Nazi-controlled territories, FFC tried to facilitate such access to assets. For bona fide refugees in the United States, FFC early on tried to ensure that their status not impinge upon access to their funds. After the war, the return of assets to the smallest property owners was eased by a unilateral decision to defrost accounts below a certain monetary threshold. Far less property was ever seized and sold than was temporarily frozen during the war, and even that freezing ensured oversight more than it absolutely prohibited transactions.
Assets vested by the APC during the war were explicitly to be used for the benefit of the United States, and when possible were converted into cash. While several hundred thousand individual assets were seized, in some of the largest categories such as patents and copyrights conversion into cash was impossible or impractical. Instead, their use was licensed, and the resulting revenues were paid to the custodian; in the postwar period, at least some of these copyrights and patents, along with the accumulated royalties, were returned to their former owners or their heirs, though with considerable delay. Claims could be and were filed for the return of vested assets, though the number of successful claims (under 4,000 by 1958) appears to have been much smaller than the number of properties seized (19,312 vesting orders).225 Gold presented its own difficulties because of its liquidity, the manner in which the world gold trade functioned, the difficulty of tracing ownership, and the U.S. policy of "unquestioned acceptance." Some of the "flight capital" arriving in the United States before the war, including gold, undoubtedly came from victims, but available information permits no definitive conclusions about what proportion that might have been.
Endnotes for Chapter 3
1 Treas. Dept., FFC, "Administration of the Wartime Financial and Property Controls of the United States Government," June 1942, 1 - 3, NACP, RG 131, FFC Subj. Files, Box 367, Rpts TFR-300 [311908-963] (hereinafter "Administration of Wartime Controls"); "General Information on the Administration, Structure and Functions of Foreign Funds Control, 1940 - 1948," Ch. 1, 1 (hereinafter "History of FFC"), NACP, RG 56, Entry 66A816, Box 47 [331331 - 775].
2 Exec. Order 8389 (Apr. 10, 1940), Sec. 2, in Martin Domke, Trading with the Enemy in World War II (New York: Central Book Co., 1943), 432 - 3.
3 Domke, Trading with the Enemy in World War II, 391, 438. Congress amended Sec. 5 (b) on May 7, 1940, as did the First War Powers Act of Dec. 18, 1941.
4 "History of FFC," Ch. 2, 3 fn. 6.
5 Ibid., Ch. 2, 1 - 4. The memos were sent on Dec. 15 and 22, 1937; Oliphant drafted extensive regulations for such controls.
6 Ibid., Ch. 2, 8.
7 Ibid., Ch. 3, 2.
8 3 Code of Federal Regulations (CFR), 1938 - 1943 Compilation. Exec. Orders 8405 (657 - 659), E.O. 8446 (674), E.O. 8484 (687), and E.O. 8565 (796).
9 CFR, 1938 - 1943 Compilation. E.O. 8701 (904), E.O. 8711 (910), E.O. 8721 (917), and E.O. 8746 (929).
10 By October 1940, both agencies agreed to examine how German funds located in the Western Hemisphere were being used. Francis Biddle, Solicitor General of the Department of Justice, asked J. Edgar Hoover, the Director of the FBI, to provide whatever information he had on such funds "being used for propaganda in this country and South America." See Memo from Francis Biddle, Solicitor General, Dept. of Justice, to J. Edgar Hoover, Oct. 15, 1940, FBI Files .
11 Dean Acheson, Present at the Creation (New York: Norton, 1969), 23. The State Department was particularly concerned about the effect a communications freeze with enemy countries would have: "it was imperative that no action be taken by this government which might invite retaliatory measures against our [diplomatic] pouch." "History of FFC," Ch. 5, 101A [331331 - 775].
12 Freezing orders were also extended to China and Japan (July 26), Thailand (Dec. 9), and Hong Kong (Dec. 26), retroactive to June 14, 1941. William Reeves, "The Control of Foreign Funds by the United States Treasury," Law and Contemporary Problems 11 (1945), 24; "Administration of Wartime Controls," 2 [331908 - 963]; Domke, Trading with the Enemy in World War II, 434.
13 Starting on May 10, 1940, the Treas. Dept., had required that assets in the U.S. belonging to blocked countries or their nationals be reported (on Form TFR - 100) as countries fell successively under the freezing order. Treas. Dept., Annual Report 1940, 543.
14 "History of FFC," Ch. 4, 19 - 20.
15 Memo from Elting Arnold, "Divulging Information from Form TFR-300 Reports to Foreign Countries and American Creditors of Nationals of Such Countries," Oct. 27, 1944, NACP, RG 131, FFC, Box 367, TFR-300 Release of Info [312003 - 009].
16 U.S. Treas. Dept., Census of Foreign-Owned Assets in the United States (Washington: Government Printing Office, 1945), 8 - 9, 55 - 57 (hereinafter Census of Foreign-Owned Assets).
17 See Judd Polk, "Freezing Dollars Against the Axis," Foreign Affairs 20 (1): Oct. 1941, 114. Polk's estimates differed significantly from the Census even though his information came from Treasury Dept. compilations and other government sources.
18 "History of FFC," Ch. 4, 22.
19 Census of Foreign-Owned Assets, 14 - 15.
20 It is worth noting in this context how comparatively small American investment in the neutral countries was. According to a 1942 American Economic Review article, American investment was largest in Spain ($86 million), followed by Sweden ($28 million), Portugal ($17 million) and Switzerland ($12 million). Cited in Edward Freutel, "Exchange Control, Freezing Orders and the Conflict of Laws," Harvard Law Review 56 (1942), 63.
21 Census of Foreign-Owned Assets, 62.
22 Ibid., 17 - 19.
23 Ibid., 20, 66, 68.
24 Ibid., 62, 75. The market value of foreign securities held in the U.S. by banks, brokers, and custodians for Danes and Norwegians was higher, at $23.5 million.
25 This was from an unpublished Commerce Dept. estimate of 1940, reproduced as Table VI in the Census of Foreign-Owned Assets, 21.
26 "Administration of Wartime Controls," 21 [311908 - 963].
27 "History of FFC," Ch. 3, 28 - 29.
28 "Administration of Wartime Controls," 21 [311908 - 963]; "History of FFC," Ch. 3, 29 [331331 - 775].
29 Treas. Dept., Annual Report 1944, 215 - 16; Treas. Dept., Annual Report 1945, 200; Reeves, "Control of Foreign Funds," 42.
30 "History of FFC," Ch. 3, 30; "Administration of Wartime Controls," 20 [311908 - 963]; Reeves, "Control of Foreign Funds," 42.
31 "Administration of Wartime Controls," 22 [311908 - 963].
32 Following Public Circular 6, as of September 13, 1941, such securities could not be redeemed unless a Form TFEL-2 was attached.
33 Reeves, "Control of Foreign Funds," 43.
34 Treas. Dept., Annual Report 1943, 126.
35 Ruling 6A was revoked again in September 1943, and control of securities subsumed under General Ruling 5, with similar restrictions.
36 "History of FFC," Ch. 5, 13 - 15. Reeves, "Control of Foreign Funds," 44. Reeves also asserts that the Treasury "did everything practical to depreciate the value of the American dollar bill in Europe and elsewhere."
37 "History of FFC," Ch. 5, 14. Currency import controls were lifted in April 1947 since by that time most important European countries in Europe "had taken steps to detect and segregate any US currency within their borders in which there was an enemy interest." Ibid., Ch. 6, 47.
38 Ibid., Ch. 5, 22.
39 Ibid., Ch. 5, 22 - 23.
40 Ibid., Ch. 5, 22 - 25. This ruling was revoked again on August 19, 1947. Ibid., Ch. 6, 47.
41 Ibid., Ch. 5, 24 - 25.
42 Barry Eichengreen, Golden Fetters: The Gold Standard and the Great Depression, 1919 - 1939 (New York: Oxford Univ. Press, 1992), 194.
43 "The 40 percent [gold cover] ratio was viewed as a critical threshold below which public confidence in convertibility would be threatened." Eichengreen, Golden Fetters: The Gold Standard and the Great Depression, 1919 - 1939, 116 n. 47. In the late 1920s, the gold cover ratios in other nations ranged from 33 percent in Albania to 75 percent in Belgium, Poland, and Germany. Barry Eichengreen, Elusive Stability (New York: Cambridge Univ. Press, 1990), 248.
44 48 Stat. 337, Sect. 8; the earlier wording comes from Sect. 2. See U.S. Statutes at Large, 73rd Congress, 2nd Session, Jan. 30, 1934, 337, 341. A license to take such actions with respect to gold was granted to the Federal Reserve Bank of New York by the Treasury Department on March 24, 1937. U.S. Treas. Dept., Spec. Form TGL-18, License No. NY-18-1, "License to Transport, Import, Melt and Treat, Export, Earmark and Hold in Custody for Foreign or Domestic Account," Mar. 24, 1937, Princeton Univ., Seely Mudd Lib. Harry Dexter White Collection, Box 3, File 82 [223773-774].
45 Milton Friedman & Anna Schwartz, A Monetary History of the United States 1867 - 1960 (Princeton: Princeton Univ. Press, 1963), 470 - 71.
46 Though gold continued to come in under anti-hoarding provisions during the next 17 years, in no year was the amount greater than $240,000. U.S. Treas. Dept., "Material in Reply to Questions from Senator Knowland," no date [ca. April 1952], NACP, RG 56, Entry 69A7584, Box 4, Congressional and other Inquiries .
47 Friedman & Schwartz, A Monetary History of the United States 1867 - 1960, 473.
48 G.A. Eddy, "The Gold Policy of the United States Treasury," 5th Draft, Jan. 7, 1949, NACP, RG 56, Entry 69A7584, Box 4, Congressional and other inquiries . The chief of the Balance of Payments Division of the Federal Reserve Bank of New York later argued that the transfer of foreign dollar assets worth $3.4 billion in the 1933 to 1940 era "reflected essentially 'autonomous' private transfers of 'hot money' from Europe and was accompanied, and in fact made possible, by large gold exports to this country." Fred Klopstock, "The International Status of the Dollar," Essays in International Finance 28 (May 1957), 7.
49 Griffeth Johnson, The Treasury and Monetary Policy 1933 - 1938 (Cambridge: Harvard University Press, 1939), 54. Friedman & Schwartz, A Monetary History of the United States 1867 - 1960, 509.
50 Johnson, The Treasury and Monetary Policy 1933 - 1938, 154. $297 million of the $3 billion total came from the Netherlands, the next largest source. Memo from Mr. White to Secy. Morgenthau, "Gold Imports in the United States," May 9, 1939, NACP, RG 56, Entry 67A1804, Box 50, Divisional Memo #2, [202539, 545].
51 Eddy, "The Gold Policy of the United States Treasury," Jan. 7, 1949, NACP, RG 56, Entry 69A7584, Box 4, Congressional and other inquiries . This 1949 total is of the same order of magnitude as the $14.47 billion noted in 1952 (see table). Likewise, though there are minor discrepancies in the figures for yearly gold imports, several sources agree that the yearly gold flows to the U.S. in the 1930s regularly amounted to between $1 and $2 billion worth. Eichengreen, Golden Fetters: The Gold Standard and the Great Depression, 1919 - 1939, 346, 353; Johnson, The Treasury and Monetary Policy 1933 - 1938, 56.
52 This table is adapted from U.S. Treas. Dept., "Material in Reply to Questions from Senator Knowland," no date [ca. April 1952], NACP, RG 56, Entry 69A7584, Box 4, Congressional and other Inquiries . Senator Knowland sat on the Senate Appropriations Committee, and inquired of Treasury on April 14, 1952; this reply was prepared by early May .
53 From Table "Gold Movement Between the United States and the Axis Powers, 1934 - 1941," NACP, RG 56, Entry 67A1804, Box 50, Div. Memo #4 , probably prepared by Mr. Bernstein for the Stabilization Fund Hearings in June 1941 .
54 Treas. Dept., Div. of Monetary Research, "Net Movement of Gold to the United States by Countries, 1940," NACP, RG 56, Entry 67A1804, Box 50, Div. Memo #4 , probably prepared by Mr. Bernstein for the Stabilization Fund Hearings in June 1941 .
55 Memo from Mr. White to Secy. Morgenthau, "Gold Imports in the United States," May 9, 1939, NACP, RG 56, Entry 67A1804, Box 50, Divisional Memo #2, [202539, 545]. An earlier (March 17, 1938), unsigned, draft memo entitled "Merits of a Proposal to Place an Embargo on Gold Imports" exists in these files, and concludes that the disadvantages of an embargo greatly outweighed the advantages; gold inflows could be reduced by other means if so desired. NACP, RG 56, Entry 67A1804, Box 50, Division Memoranda #1[202515-520].
56 H. D. White, "The Future of Gold," corrected copy, Dec. 12, 1940, Princeton Univ., Seely Mudd Manuscript Lib., Harry Dexter White Collection, Box 4, Future of Gold (Part I, Section III), Folder # 10 [223821-835].
57 Memo from Mr. White to Secy. Morgenthau, "Gold Imports in the United States," May 9, 1939, NACP, RG 56, Entry 67A1804, Box 50, Divisional Memo #2, [202539, 545].
58 Note handed by Mr. Pinsent, Financial Counselor to Brit. Embassy, to Mr. Cochran in the Treasury at 7 P.M., May 27, 1940, NACP, RG 56, Entry 67A1804, Box 49, Discrimination (U.S.) . Pinsent estimated the worth of the gold at 50 - 100 million pounds.
59 Memo from Mr. White to Messrs. D.W. Bell, Cochran & Foley, June 4, 1940, NACP, RG 56, Entry 67A1804, Box 49, Discrimination (U.S.) . Though White does not explicitly say so, this assertion was most likely due to the practice among gold trading nations to resmelt, recast, and stamp bars of gold with national marks, thereby hindering the tracing of the previous origin.
60 Ibid. [202512-513].
61 H. D. White, "The Future of Gold," corrected copy, Dec. 12, 1940, Princeton Univ., Seely Mudd Manuscript Lib., Harry Dexter White Collection, Box 4, Future of Gold (Part I, Section III), Folder # 10 [223821-835].
62 Memo from Ms. Kistler to Mr. White, "Whose gold are we buying?" Feb. 13, 1941, NACP, RG 56, Entry 67A1804, Box 49, Acquisitions [202388-202389].
63 "History of FFC," Ch. 3, 14 - 17, under Gen. Licenses 32 (Aug. 1940) & 33 (Sept. 1940).
64 Reeves, "The Control of Foreign Funds by the United States Treasury," 38 - 41.
65 "Administration of Wartime Controls," 12 [311908-963].
66 The figures permitting this calculation may be found in Treas. Dept., Annual Report 1941, 219; 1942, 56; 1943, 125; 1944, 127; 1945, 206. In 1946, 112,000 applications were filed (1947: 54,000; 1948: 15,000), but no approval percentages are given. See Treas. Dept., Annual Reports 1946, 200; 1947, 123; 1948, 138.
67 Treas. Dept., Annual Report 1945, 423.
68 Reeves, "The Control of Foreign Funds by the United States Treasury," 52 - 53; "Administration of Wartime Controls," 19 [311908-963].
69 "History of FFC," Ch. 5, 82. This was "probably the most common device employed for large holdings."
70 "Administration of Wartime Controls," 4, 32, 37 [311908-963]; Reeves, "The Control of Foreign Funds by the United States Treasury," 54.
71 "Administration of Wartime Controls," 34 - 35 [311908-963]; "History of FFC," Ch. 5, 86. These sources do not make clear whether this is the number liquidated only by FFC, whether this represents liquidations by both FFC and the APC, or whether this included Japanese-controlled companies as well.
72 Mitchell Carroll, "Legislation on Treatment of Enemy Property," American Journal of International Law 37 (1943), 628.
73 Domke, Trading with the Enemy in World War II, 63.
74 Donald Perry, "Aliens in the United States," The Annals, v. 223 (September 1942), 5 - 7. The registration was mandated by the President on June 28, 1940. "Europe" in this list encompassed the Baltics, Russia, Eastern Europe and Turkey, as well as Austria-Hungary. No more than 1.5 million aliens (30 percent of 4.9 million) would have arrived after 1924, of whom no more than 1 million (30 percent of 3.57 million) came from "Europe."
75 That is, 25 percent of the Germans, 42 percent of the Italians, and 37 percent of the Japanese were aliens. Charles Gordon, "Status of Enemy Nationals in the United States," Lawyers Guild Review 2 (1942), 9, citing Att. Gen. Francis Biddle. See also "Alien Enemies and Japanese-Americans: A Problem of Wartime Controls," Yale Law Journal 51 (1942), 1317.
76 Maurice Davie, "Immigrants from Axis-Conquered Countries," The Annals, v. 223 (September 1942), 114. The countries were France, Belgium, Netherlands, Denmark, Norway, Greece, Poland, Czechoslovakia and Yugoslavia. Thus one reading is that most of the foreign-born had arrived long before 1940 and were already naturalized (though at least some may have never taken out U.S. citizenship); another reading is that for a given European immigrant group, around one-third in 1940 were registered aliens (among whom varying percentages would have been refugees).
77 George Warren, "The Refugee and the War," The Annals, v. 223 (September 1942), 92. This figure was "based on Jewish immigration from Europe since 1933 and statistics of other admissions from European countries in which refugees have originated or through which they have passed."
78 Domke, Trading with the Enemy in World War II, 39, 436. The freezing order definition of "national" extended to partnerships, associations, and corporations that "had their principal place of business in such foreign country" or were controlled by such foreign country and/or one or more nationals. Also Reeves, "Control of Foreign Funds," 33 - 34; "New Administrative Definitions of 'Enemy' to Supersede the Trading with the Enemy Act," Yale Law Journal 51 (1942), 1392 - 93.
79 According to the Treasury Department on May 8, 1943. Martin Domke, The Control of Alien Property, (New York: Central Book Company, 1947), 291. Thus, though "many refugees are stateless, statelessness is not the essential quality of a refugee." Jane Carey, "Some Aspects of Statelessness since World War I," American Journal of International Law 40 (1946), 113.
80 Domke, Trading with the Enemy in World War II, 437.
81 Thus persons not in designated enemy countries were not deemed nationals of a designated enemy country unless the Alien Property Custodian determined that such a person was controlled by, or acting for or on behalf of, a designated enemy country or person. Otto Sommerich, "Recent Innovations in Legal and Regulatory Concepts as to the Alien and his Property," American Journal of International Law 37 (1943), 65. See also Domke, Trading with the Enemy in World War II, 44. This Executive Order delimited the powers of the Alien Property Custodian.
82 The stateless who were still resident in blocked countries were "nationals" as defined by the freezing order, while the stateless who were resident in the U.S. after June 17, 1940, were generally licensed nationals. The difficulties arose for refugees in transit, because though they might intend to never return, technically their domicile might still be in a blocked country. Arthur Bloch & Werner Rosenberg, "Current Problems of Freezing Control," Fordham Law Review 11 (1942), 75.
83 "History of FFC," Chapter 4, 30 - 33. The New York law firm Topken & Farley, specializing in German business, was blocked in this "ad hoc" manner. Clearly ad hoc blocking also implied ad hoc mitigating circumstances for long-term U.S. residents who happened not to be U.S. citizens.
84 Ibid., Chapter 5, 7 - 8.
85 Rudolf Littauer, "Confiscation of the Property of Technical Enemies," Yale Law Journal 52 (1943), 741.
86 Frank Sterck & Carl Schuck, "The Right of Resident Alien Enemies to Sue," Georgetown Law Journal 30 (1942), 433. The legal complications created by this Act are well summarized in Michael Brandon, "Legal Control over Resident Enemy Aliens in Time of War in the United States and in the United Kingdom," American Journal of International Law 44 (1950), 382 - 87.
87 Domke, Trading with the Enemy in World War II, 63 - 64; Sterck & Schuck, "The Right of Resident Alien Enemies to Sue," 434.
88 "New Administrative Definitions of 'Enemy' to Supersede the Trading with the Enemy Act," 1388. By the same token, "citizens domiciled in enemy territory are regarded as enemies." Leon Yudkin & Richard Caro, "New Concepts of 'Enemy' in the 'Trading with the Enemy Act'," St. John's Law Review 18 (1943), 58.
89 The proclamations were issued on December 7, 1941, for Japanese (No. 2525, 6 F.R. 6321), and on December 8, 1941, for German (No. 2526, 6 F.R. 6323) and Italian (Proclamation No. 2527, 6 F.R. 6324) alien enemies. Travel and identification regulations were issued on February 5, 1942, and January 22, 1942, respectively. Gordon, "Status of Enemy Nationals in the United States," 12 - 13; Domke, Trading with the Enemy in World War II, 68; "Alien Enemies and Japanese-Americans: A Problem of Wartime Controls," 1319 - 1321.
90 Earl Harrison, "Alien Enemies," 13 Penn Bar Association Quarterly 196, cited in Gordon, "Status of Enemy Nationals in the United States," 10 - 11.
91 E.g., "many 'enemy aliens' are actually here because they are friendly," Clyde Eagleton, "Friendly Aliens," American Journal of International Law 36 (1942), 662.
92 "Alien Enemies and Japanese-Americans: A Problem of Wartime Controls," 1337; Domke, Trading with the Enemy in World War II, 29, 47.
93 Robert Wilson, "Treatment of Civilian Alien Enemies," American Journal of International Law 37 (1943), 41; "Alien Enemies and Japanese-Americans: A Problem of Wartime Controls," 1321; E. von Hofmannsthal, "Austro-Hungarians," American Journal of International Law 36 (1942), 292 - 294.
94 Sommerich, "Recent Innovations in Legal and Regulatory Concepts as to the Alien and his Property," 60; Domke, The Control of Alien Property, 47.
95 Gordon, "Status of Enemy Nationals in the United States," 11; "Civil Rights of Enemy Aliens During World War II," Temple University Law Quarterly 17 (1942), 87.
96 The quotes are from Department of Justice press releases on December 9 and December 14, 1941, respectively, cited in Sterck and Schuck, "The Right of Resident Alien Enemies to Sue," 421.
97 Ibid., 423.
98 32 N.Y.S. (2d) 450, 177 Misc. 939 (Jan. 21, 1942).
99 Sterck and Schuck, "The Right of Resident Alien Enemies to Sue," 431 - 432, 436. "Civil Rights of Enemy Aliens During World War II," 91 states that in this case "the alien enemy was considered an alien friend, rather than an enemy." Nonresident aliens did not have this right to sue.
100 As with citizens, the outcome was uncertain for aliens. "Even if he can prove that he is not an enemy," for example in pursuing a claim against the APC, he "need not necessarily succeed in obtaining possession of the property" if the court deemed otherwise. See Rudolf Littauer, "Confiscation of the Property of Technical Enemies," Yale Law Journal 52 (1943), 769; "Civil Rights of Enemy Aliens During World War II," 91.
101 Confiscation by the U.S. government was practiced on an unprecedented scale during World War II (in 1941 and 1942, the War Department took complete control of nearly ten million acres of land), and was legitimated by the legislative mandates provided by the Export Control Requisition Act of Oct. 10, 1940 and the Requisitioning Act of October 16, 1941, among other laws. For an extended discussion, see Paul Marcus, "The Taking and Destruction of Property Under a Defense and War Program," Cornell Law Quarterly 27 (1942), 317 - 346, 476 - 533.
102 Russian Volunteer Fleet v. United States, 282 U.S. 481, (1931) that "the petitioner was an alien friend and as such was entitled to the protection of the Fifth Amendment," and this ruling was subsequently upheld in Becker Steel Co. v. Cummings 296 U.S. 74 (1935). See Littauer, "Confiscation of the Property of Technical Enemies," 760; also "Former Enemies may sue in Court of Claims to Recover Value of Property Unlawfully Vested by Alien Property Custodian," University of Pennsylvania Law Review 106 (1958), 1059.
103 "Friendly Alien's Right to Sue for Return of Property Seized by Alien Property Custodian," Yale Law Journal 56 (1947), 1068 - 76; "Remedy Available to Alien Friend whose Property has been 'Vested' by Alien Property Custodian," Columbia Law Review 47 (1947), 1052 - 61.
104 "Return of Property Seized during World War II: Judicial and Administrative Proceedings under the Trading with the Enemy Act," Yale Law Journal 62 (1953), 1210 - 35.
105 In fact, licenses under the freezing order as well as the rights of the APC were "irrelevant," since the TWEA deals with "disabilities in respect of the privileges of trade" and they had "no connection with disabilities in respect of the ownership of lands." Lawrence Pratt, "Present Alienage Disabilities under New York State Law in Real Property," Brooklyn Law Review 12 (1942), 3.
106 Ibid., 3.
107 Sterck and Schuck, "The Right of Resident Alien Enemies to Sue," 422; for a discussion of this case, see Domke, Trading with the Enemy in World War II, 79 - 82.
108 Pratt, "Present Alienage Disabilities under New York State Law in Real Property," 2.
109 Ibid., 12 - 13. The catch was that aliens had no "heritable blood" in common law and therefore could not transmit by descent.
110 Ibid., 14; Sect. 60 - 70 of the Public Lands Law (N.Y. Laws 1909, c. 46).
111 Pratt, "Present Alienage Disabilities under New York State Law in Real Property," 2; Domke, Trading with the Enemy in World War II, 96.
112 Domke, The Control of Alien Property, 61 - 62. New Jersey had previously passed similar legislation (on April 8, 1943) removing the disabilities imposed upon resident aliens classified as "enemies."
113 Doris Banta, "Alien enemies: Right to acquire, hold, and transmit real property: Recent change in New York Real Property Law abolishing all disabilities," Cornell Law Quarterly 30 (1944), 242.
114 Ibid., 238. Banta further notes that even at the time, 19 states made no distinctions between aliens and citizens in their right to transfer property at death.
115 William Butler, "Proving Foreign Documents in New York," Fordham Law Review 18 (1949), 49 - 71.
116 "History of FFC," Ch. 5, 107.
117 Ibid., 108.
118 Ibid., 109.
119 Ibid., 109 - 10. The President established the War Refugee Board by Executive Order at about the same time, a body dedicated to helping rescue persons in occupied territories who were in imminent danger of death. The Board's operations were mostly privately funded and required an FFC license: about $20 million in private funds was transferred abroad for private rescue and relief projects under this license, aided by a U.S. government appropriation of about $2 million for operations.
120 John Foster Dulles, "The Vesting Powers of the Alien Property Custodian," Cornell Law Quarterly 28 (1943), 246.
121 Kenneth Carlston, "Foreign Funds Control and the Alien Property Custodian," Cornell Law Quarterly 31 (1945), 5 - 6.
122 Francis Fallon, "Enemy Business Enterprises and the Alien Property Custodian, I," Fordham Law Review 15 (1946), 223 - 24.
123 Stuart Weiss, The President's Man: Leo Crowley and Franklin Roosevelt in Peace and War (Carbondale: Southern Illinois Univ. Press, 1996), 117, 127. Weiss implies the scandal involved patronage and a lack of transparency in the Custodian's actions.
124 This Division was transferred, with all its property and personnel, to the APC on Apr. 21, 1942, by Exec. Order 9142. Office of Alien Property Custodian, Annual Report, [hereinafter APC Annual Report] 1944, 2 . See also Domke, Trading with the Enemy in World War II, 264.
125 The best overview of the office and the assets under its control is Paul Myron, "The Work of the Alien Property Custodian," Law and Contemporary Problems 11 (1945), 76 - 91. Myron was Chief of the Estates and Trusts Sec. in the office during 1942 and 1943, then worked as Assistant to the Custodian in 1944 and 1945.
126 APC Annual Report, 1942 - 43, 26 - 27 [322726-808].
127 A. M. Werner,"The Alien Property Custodian," Wisconsin State Bar Association Bulletin 16 (1943), 15.
128 APC Annual Report, 1942 - 43, 19 - 20 [322726-808].
129 Werner, "The Alien Property Custodian," 15. This was meant as a means to protect due process. See Kenneth Woodward, "Meaning of 'Enemy' Under the Trading with the Enemy Act," Texas Law Review 20 (1942), 753.
130 Dulles, "The Vesting Powers of the Alien Property Custodian," 259.
131 APC Annual Report, 1944, V [322809-953].
132 Sommerich, "Recent Innovations in Legal and Regulatory Concepts as to the Alien and his Property," 68.
133 Carlston, "Foreign Funds Control and the Alien Property Custodian," 22.
134 Specimen copies of vesting orders from 1942 and 1943 can be found in Sommerich, "Recent Innovations in Legal and Regulatory Concepts as to the Alien and his Property," 67, and in Domke, Trading with the Enemy in World War II, 467 - 68.
135 Carlston, "Foreign Funds Control and the Alien Property Custodian," 7.
136 Dulles, "The Vesting Powers of the Alien Property Custodian," 257. The case referred to is Swiss Insurance Company.
137 Carlston, "Foreign Funds Control and the Alien Property Custodian," 8.
138 The following is from the discussions in APC Annual Report, 1942 - 43, 4, 68 [322726-808]; APC Annual Report, 1944, 4 [322809-953].
139 APC Annual Report, 1942 - 43, 69 [322726-808].
140 Ibid., 69 - 70 [322726-808].
141 APC Annual Report, 1944, 136 [322809-953].
142 APC Annual Report, 1945, 19 [322954-3090]. Assets were greater than equity because creditors other than the Custodian held interests in the companies, but also because total value included supervised property in which the Custodian had no equity.
143 Ibid., 1945, 17 [322954-3090].
144 Fritz Machlup, "Patents," International Encyclopedia of the Social Sciences (New York: Macmillan, 1968), v. 11, 461.
145 APC Annual Report, 1944, 109 [322809-953].
146 Wallace McClure, "Copyright in War and Peace," American Journal of International Law 36 (1942), 386.
147 Domke, Trading with the Enemy in World War II, 280.
148 APC Annual Report, 1944, 109 - 10 [322809-953].
149 APC Annual Report, 1945, 121 [322954-3090]; APC Annual Report, 1942 - 43, 39 [322726-808].
150 APC Annual Report, 1945, 132 - 33 [322954-3090].
151 For a sample list of titles, see the APC Annual Report, 1942 - 43, 64 - 65 [322726-808]. Given the fate of Jewish intellectuals employed in German universities before the war, it is likely that some of the royalties from such works belonged to victims.
152 APC Annual Report, 1945, 122 [322954-3090].
153 Ibid., 123 - 24 [322954-3090].
154 APC Annual Report, 1942 - 43, 64 [322726-808]; APC Annual Report, 1945, 126 - 27 [322954-3090].
155 Fed. Reg., Nov. 4, 1950, 7459 - 60.
156 APC Annual Report, 1945, 125 [322954-3090] for the royalty amount; the quote is from APC Annual Report, 1944, 113 [322809-953].
157 "Administration of Wartime Controls," 31 [311908-963]; Domke, Trading with the Enemy in World War II, 291 - 2.
158 Werner, "The Alien Property Custodian," 17.
159 APC Annual Report, 1944, 115 [322809-953].
160 Domke, The Control of Alien Property, 190 - 191.
161 APC Annual Report, 1944, 116 [322809-953].
162 APC Annual Report, 1945, 137 [322954-3090].
163 Guenther Reimann, Patents for Hitler (London: Victor Gollancz, 1945), 137.
164 U.S. Senate, Committee on Military Affairs, Subcommittee on War Mobilization, Scientific & Technical Mobilization, Hearings, 78th Congress, 1st Session, 1943. A 1942 investigation by the Committee on Patents under Homer T. Bone had heard similar testimony. Mira Wilkins, The Maturing of Multinational Enterprise: American Business Abroad from 1914 to 1970 (Cambridge: Harvard Univ. Press, 1974), 263.
165 Wilkins, The Maturing of Multinational Enterprise, 79, 81. Wilkins (258) cites an April 18, 1941, cable from a Du Pont vice president to I.G. Farben, "suggesting that in view of government restrictions our two companies mutually agree discontinue exchange technical information patent applications etc on all existing contracts," which was a response to a Presidential proclamation of April 15, 1941, that required technical information exports to be licensed.
166 E.g., "Few have appreciated until recently the extent to which enemy controlled patents had been used even before the war in a systematic conspiracy to throttle American war production." Werner, "The Alien Property Custodian," 17.
167 Senate Committee on Patents, testimony given on Apr. 27, 1942, cited in Howland Sargeant & Henrietta Creamer, "Enemy Patents," Law and Contemporary Problems 11 (1945), 101.
168 Cited in Carlston, "Foreign Funds Control and the Alien Property Custodian," 19.
169 APC Annual Report, 1942 - 43, 27 - 28 [322726-808].
170 Ibid., 40 - 45 [322726-808].
171 Of the 44,796 vested patents (the number given in an article written by the Chief of the Division of Patent Administration in the Office of APC), half were for the machinery, chemical, automotive, electrical and radio industries. Sargeant & Creamer, "Enemy Patents," 107 - 08.
172 APC Annual Report, 1945, 97, 99, 100 [322954-3090].
173 Reeves, "The Control of Foreign Funds by the United States Treasury," 55.
174 Carlston, "Foreign Funds Control and the Alien Property Custodian," 19 - 20; Sargeant & Creamer, "Enemy Patents," 96 - 105.
175 APC Annual Report, 1945, 101 [322954-3090].
176 Ibid., 102 [322954-3090].
177 Edwin Borchard, "Nationalization of Enemy Patents," American Journal of International Law 37 (1943), 92 - 97.
178 APC Annual Report, 1945, 105 - 108 [322954-3090].
179 Ibid., 107 - 108 [322954-3090].
180 APC Annual Report, 1942 - 43, 62 - 3 [322726-808]. Particular mention is also made of the patent for the synthetic antimalarial drug Atabrine, as "of vital importance to the successful prosecution of the war" after the Dutch East Indies--which produced 96 percent of the world supply of quinine--had fallen to the Japanese.
181 APC Annual Report, 1945, 110 [322954-3090].
182 The quote is from A. M. Werner, Gen. Counsel for the APC in early 1943. Werner, "The Alien Property Custodian," 17 - 18.
183 APC Annual Report, 1945, 119 [322954-3090].
184 Ibid., 21 - 22, 25 [322954-3090].
185 Ibid., 31 [322954-3090]. For a very thorough survey of the control of enterprises by the APC, written by a former Assistant Gen. Counsel in the Office of APC, see Francis Fallon, "Enemy Business Enterprises and the Alien Property Custodian, I" Fordham Law Review 15 (1946), 222 - 247; Francis Fallon, "Enemy Business Enterprises and the Alien Property Custodian, II" Fordham Law Review 16 (1947), 55 - 85.
186 APC Annual Report, 1945, 33, 36 - 37 [322954-3090]. The remaining six were owned by persons of "other" nationality.
187 Ibid., 33 [322954-3090]. Types of enterprises included corporations (291), partnerships (27), proprietorships (23), nonprofit organizations (12), U.S. branches of foreign enterprises (52), and miscellaneous associations (3).
188 Ibid., 36 [322954-3090]; APC Annual Report, 1942 - 43, 37 [322726-808]. Most of the banking firms that were vested were Japanese.
189 APC Annual Report, 1945, 42 - 43 [322954-3090].
190 Ibid., 49 - 54 [322954-3090]. There were high levels of insolvency (at least 81 companies), as well as uncertainties about liability in the case of sole proprietorships or unincorporated branches of foreign enterprises (77 companies), making it difficult to realize gain from their sale. Assets in foreign countries, and the difficulty of selling property for which there was no ready market provided further hindrances. Thus, of the vested enterprises on June 30, 1945, 30 were closed banks and insurance companies, 34 were in liquidation, 55 companies operated at a profit, but 172 operated at a loss: the net loss was thus $1.4 million (54). The sale of a few (largely Japanese) vested banks was marginally profitable.
191 Ibid., 139 [322954-3090].
192 Ibid., 140 [322954-3090]. That is, vested properties clustered along the New York (90), Pennsylvania (54), New Jersey (23), Maryland (16), and Washington D.C. (27) corridor, but Missouri (61) and Texas (34) were also well represented, likely owing to their relatively large German immigrant populations. California (98) and Hawaii (73) properties were largely Japanese-owned.
193 APC Annual Report, 1945, 141 [322954-3090].
194 APC Annual Report, 1942 - 43, 31 [322726-808].
195 APC Annual Report, 1945, 145 [322954-3090].
196 Ibid., 146 [322954-3090]. To judge by the disparities, the most difficult to sell were industrial machinery, equipment and materials.
197 Ibid., 157 [322954-3090].
198 Merlin Staring, "The Alien Property Custodian and Conclusive Determinations of Survivorship," Georgetown Law Journal 35 (1947), 264 - 265.
199 APC Annual Report, 1945, 176 [322954-3090].
200 APC Annual Report, 1944, 125 [322809-953]; APC Annual Report, 1945, 147 [322954-3090].
201 APC Annual Report, 1945, 149 [322954-3090].
202 John Dickinson, "Enemy-Owned Property: Restitution or Confiscation?" Foreign Affairs 22 (1943), 138 - 9. Leo Pasvolsky, an advisor in the State Department during the war, put it succinctly in 1942: "[w]e must make sure that the cessation of armed hostilities will not be followed by a continuation of economic warfare."
203 Terminal Rpt., Office of APC, Oct. 1946, 3 - 4 [106992-7020].
204 This sum represents mingled German and Japanese property, and claims were paid out of it to those who were neither German nor Japanese, such as to Italian POWs. William Reeves, "Is Confiscation of Enemy Assets in the National Interest of the United States?" Virginia Law Review 40 (1954), 1046.
205 In fact, by October of 1946, the APC vested four times as much as it had in the nine months preceding, and doubled the amount of cash it realized from sales and liquidation. However, the property of Hungarian, Romanian, and Bulgarian nationals was only still vested if the property had been acquired in the United States before December 7, 1945. Terminal Rpt., Office of APC, Oct. 1946 [106992-7020].
206 Statement of Paul V. Myron, Dept. Dir. of the OAP, Feb. 20, 1953, U.S. Congress, Senate, Committee on the Judiciary, Administration of the Trading with the Enemy Act, 83rd Cong., 1st Sess., Hearings, Feb. - Apr., 1953, 106.
207 Term. Rpt., Office of APC, Oct. 1946, 5 [106992-7020].
208 Letter from Paul V. Myron to Congressman Arthur G. Klein, Aug. 10, 1956, AJA, WJC Papers, Box C294.
209 Term. Rpt., Office of APC, Oct. 1946, 18 [106992-7020].
210 Ibid., 19 [106992-7020].
211 APC Annual Report, 1945, 9 [322954-3090] noted that by June 30, 1945, the APC had issued 5,226 vesting orders; the APC Annual Report, 1958, 82 [324619-718] noted a return order number 19,234 on Oct. 30, 1957. By the end of vesting on April 17, 1953, about 1,750 return orders had been issued. APC Annual Report, 1953, 147.
212 All preceding information in this paragraph is from the OAP Annual Reports, 1948 - 1958.
213 Malcolm Mason, "Relationship of Vested Assets to War Claims," Law and Contemporary Problems 16 (1951), 395.
214 Term. Rpt., Office of APC, Oct. 1946, 14 [106992-7020].
215 Mason, "Relationship of Vested Assets to War Claims," 398 - 399. Jessup, however, calls it a "dubious doctrine that there is no confiscation if the enemy state is required to assume an obligation to compensate its nationals whose property is held for the satisfaction of claims." Philip Jessup, "Enemy Property," American Journal of International Law 49 (1955), 62.
216 Jessup, "Enemy Property," 58 - 59. See also the discussion of the Paris in Reeves, "Is Confiscation of Enemy Assets in the National Interest of the United States?"
217 Malcolm Mason, "Relationship of Vested Assets to War Claims," Law and Contemporary Problems 16 (1951), 397. According to Philip Jessup, aid to Germany amounted to $1.47 billion from 1948 to 1954--see his "Enemy Property," American Journal of International Law 49 (1955), 58--but to $3.3 billion according to a 1952 HICOG report: "nearly all of this aid represents the cost of procuring and shipping food, industrial raw material and like commodities to Germany." Cited in Reeves, "Is Confiscation of Enemy Assets in the National Interest of the United States?," 1044.
218 Mason, "Relationship of Vested Assets to War Claims," 398. Mason was chief of the legal branch in the Office of Alien Property in the Department of Justice.
219 Jessup, "Enemy Property," 59.
220 Mason, "Relationship of Vested Assets to War Claims," 403.
221 Mason, "Relationship of Vested Assets to War Claims," 400 - 02.
222 Mason, "Relationship of Vested Assets to War Claims," 404. Indeed, Mason states that the War Claims Commission "seriously delayed" a bill for the heirless property successor organization, and only withdrew later when it was argued that the sum involved would not be more than three million dollars.
223 Dulles, "The Vesting Powers of the Alien Property Custodian," 252.
224 The Annual Reports of the OAP list the dates and sometimes the amounts returned to the Netherlands.
225 The total number of return orders was 4,475 by February 1965. OAP, Vesting Orders 1 - 19,312 and Return Orders 1 - 4,475, Recs. Office Civil Division, Dept. of Justice.
Learning Cocoa with Objective-C/Cocoa Overview and Foundation/Introduction to Cocoa
Cocoa provides a rich layer of functionality on which you can build applications. Its comprehensive object-oriented API complements a large number of technologies that Mac OS X provides. Some of these technologies are inherited from the NeXTSTEP operating system. Others are based on the BSD Unix heritage of Mac OS X's core. Still others come from the original Macintosh environment and have been updated to work with a modern operating system. In many cases, you take advantage of these underlying technologies transparently, and you get the use of them essentially "for free." In some cases, you might use these technologies directly, but because of the way Cocoa is structured, they are a simple and direct API call away.
This chapter provides an overview of the Mac OS X programming environment and Cocoa's place in it. You will then learn about the two frameworks—Foundation and Application Kit (or AppKit)—that make up the Cocoa API, as well as the functionality that they provide.
The Mac OS X Programming Environment
Mac OS X provides five principal application environments:
- Carbon
- A set of procedural APIs for working with Mac OS X. These interfaces were initially derived from the earlier Mac OS Toolbox APIs and modified to work with Mac OS X's protected memory environment and preemptive task scheduling. As a transitional API, Carbon gives developers a clear way to migrate legacy applications to Mac OS X without requiring a total rewrite. Adobe Photoshop 7.0 and Microsoft Office v. X are both examples of "Carbonized" applications. For more information on Carbon, see /Developer/Documentation/Carbon or Learning Carbon (O'Reilly).
- Cocoa
- A set of object-oriented APIs derived from NeXT's operating-system technologies that take advantage of many features from Carbon. Programming with the Cocoa API is the focus of this book. Many applications that ship with Mac OS X, such as Mail and Stickies, are written in Cocoa. In addition, many of Apple's latest applications, such as iPhoto, iChat, and iDVD2, are built on top of Cocoa.
- Java
- A robust and fast virtual-machine environment for running applications developed using the Java Development Kit. Java applications are typically very portable and can run unchanged, without recompilation, on many different computing environments.
- BSD Unix
- The BSD layer of Mac OS X that provides a rich, robust, and mature set of tools and system calls. The standard BSD tools, utilities, APIs, and functions are available to applications. A command-line environment also exists as part of this layer.
- Classic
- The compatibility environment in which the system runs applications originally written for Mac OS 8 or Mac OS 9 that have not been updated to take full advantage of Mac OS X. Classic is essentially a modified version of Mac OS 9 running inside a process that has special hooks into other parts of the operating system. Over time, Classic is becoming less interesting as more applications are ported to run natively in Mac OS X.
To some degree, all of these application environments rely on other parts of the system. Figure 1-1 gives a layered, albeit simplified, illustration of Mac OS X's application environments and their relationship to the other primary parts of the operating system.
As you can see from Figure 1-1, each of Mac OS X's application environments relies upon functionality provided by deeper layers of the operating system. This functionality is roughly broken into two major sections: Core Foundation, which provides a common set of application and core services to the Cocoa, Carbon, and Java frameworks, and the kernel environment, which is the underlying Unix-based core of the operating system.
Cocoa is an advanced object-oriented framework for building applications that run on Apple's Mac OS X. It is an integrated set of shared object libraries, a runtime system, and a development environment. Cocoa provides most of the infrastructure that graphical user applications typically need and insulates those applications from the internal workings of the core operating system.
Think of Cocoa as a layer of objects acting as both mediator and facilitator between programs that you build and the operating system. These objects span the spectrum from simple wrappers for basic types, such as strings and arrays, to complex functionality, such as distributed computing and advanced imaging. They are designed to make it easy to create a graphical user interface (GUI) application and are based on a sophisticated infrastructure that simplifies the programming task.
Cocoa-based applications are not just limited to using the features in the Cocoa frameworks. They can also use all of the functionality of the other frameworks that are part of Mac OS X, such as Quartz, QuickTime, OpenGL, ColorSync, and many others. And since Mac OS X is built atop Darwin, a solid BSD-based system, Cocoa-based applications can use all of the core Unix system functions and get as close to the underlying filesystem, network services, and devices as they need to.
The History of Cocoa
Cocoa has actually been around a long time—almost as long as the Macintosh itself. That is because it is, to a large extent, based on OpenStep, which was introduced to the world as NeXTSTEP in 1987, along with the elegant NeXT cube. At the time, the goal of NeXTSTEP was to, as only Steve Jobs could say, "create the next insanely great thing." It evolved through many releases, was adopted by many companies as their development and deployment environment of choice, and received glowing reviews in the press. It was, and continues to be, solid technology based on a design that was years ahead of anything else in the market.
NeXTSTEP was built on top of BSD Unix from UC Berkeley and the Mach microkernel from Carnegie-Mellon University. It utilized Display PostScript from Adobe — allowing the same code, using the PostScript page description language — to display documents on screen and to print to paper. NeXTSTEP came with a set of libraries, called "frameworks," and tools to enable programmers to build applications using the Objective-C language.
In 1993 NeXT exited the hardware business to concentrate on software. NeXTSTEP was ported to the Intel x86 architecture and released. Other ports were performed for the SPARC, Alpha, and PA-RISC architectures. Later, the frameworks and tools were revised to run on other operating systems, such as Windows and Solaris. These revised frameworks became known as OpenStep.
Fast forward to 1996. Apple had been working unsuccessfully on a next-generation operating system, known as Copland, to replace the venerable Mac OS 7. Their efforts were running amok and they decided to look outside for the foundation of the new OS. The leading contender seemed to be BeOS, but in a surprise move, Apple acquired NeXT, citing its strengths in development software and operating environments for both the enterprise and Internet markets. As part of this merger, Apple embarked on the development of Rhapsody, a development of the NeXTSTEP operating system fused with the classic Mac OS. Over the next five years, Rhapsody evolved into what was released as Mac OS X 10.0. As part of that evolution, OpenStep became Cocoa.
Mac OS X remains very much a Unix system; the Unix side of Mac OS X is just hidden from users unless they really want to use it. Its full power, however, is available to you, the programmer, to utilize. Not only can you take advantage of the power, you can actually look under the hood and see how it all works. The source code to the underpinnings of Mac OS X can be found as part of Apple's Darwin initiative (http://www.developer.apple.com/darwin).
Cocoa's Feature Set
At its foundation, Cocoa provides basic types such as strings and arrays, as well as basic functions such as byte swapping, parsing, and exception handling. Cocoa also provides utilities for memory management, utilities for archiving and serializing objects, and access to kernel entities and services such as tasks, ports, run loops, timers, threads, and locks.
On top of this foundation, Cocoa provides a set of user-interface widgets with quite a bit of built-in functionality. This functionality includes such expected things as undo and redo, drag and drop, and copy and paste, as well as lots of bonus features such as spell checking that can be enabled in any Cocoa component that accepts text. You will see how much of this functionality works while you work through the tutorials in this book.
- Imaging and printing
- Mac OS X's imaging and printing model is called Quartz and is based on Adobe's Portable Document Format (PDF). Unlike previous versions of Mac OS, the same code and frameworks are used to draw the onscreen image and to send output to printers. You'll get firsthand experience drawing with Quartz in Chapter 7, and with printing in Chapter 12.
Apple's color management and matching technology, ColorSync, is built into Quartz, ensuring that colors in documents are automatically color-corrected for any device on which they are printed or displayed. Any time an image is displayed in a Cocoa window or printed, its colors are automatically rendered correctly according to any color profile embedded in the image, along with profiles for the display or printer.
- Internationalization and localization
- Cocoa's well-designed internationalization architecture allows applications to be localized easily into multiple languages. Cocoa keeps the user-interface elements separate from the executable, enabling multiple localizations to be bundled with an application. The underlying technology is the same that is used by Mac OS X to ship a single build of the OS with many localizations. This technology is covered in Chapter 14.
Because Cocoa uses Unicode as its native character set, applications can easily handle all the world's living languages. The use of Unicode eliminates many character-encoding hassles. To help you handle non-Unicode text, Cocoa provides functionality to help you translate between Unicode and the other major character sets in use today.
- Text and fonts
- Cocoa offers a powerful set of text services that can be readily adapted by text-intensive applications. These services include kerning, ligatures, tab formatting, and rulers, and they can support text buffers as large as the virtual memory space. The text system also supports embedded graphics and other inline attachments. You'll work with this text system firsthand in Chapter 11.
Cocoa supports a variety of font formats, including the venerable Adobe PostScript (including Types 1, 3, and 42), the TrueType format defined by Apple in the late 1980s and adopted by Microsoft in Windows 3.1, and the new OpenType format, which merges the capabilities of both PostScript and TrueType.
- Exported application services
- Cocoa applications can make functionality available to other applications, as well as to end users, through two mechanisms: scripting with AppleScript and via Services.
AppleScript enables users to control applications directly on their system, including the operating system itself. Scripts allow even relatively unskilled users to automate common tasks and afford skilled scripters the ability to combine multiple applications to perform more complex tasks. For example, a script that executes when a user logs in could open the user's mail, look for a daily news summary message, and open the URLs from the summary in separate web-browser windows. Scripts have access to the entire Mac OS X environment, as well as other applications. For example, a script can launch the Terminal application, issue a command to list the running processes, and use the output for some other purpose.
Services, available as a submenu item of the application menu, allow users to use functionality of an application whenever they need to. For example, you can highlight some text in an application and choose the "Make New Sticky Note" service. This will launch the Stickies application (/Applications), create a new Sticky, and put the text of your selection into it. This functionality is not limited to text; it can work with any data type.
- Component technologies
- One of the key advantages of Cocoa as a development environment is its capability to develop programs quickly and easily by assembling reusable components. With the proper programming tools and a little work, you can build Cocoa components that can be packaged and distributed for use by others. End-user applications are the most familiar use of this component technology in action. Other examples include the following:
- Bundles containing executable code and associated resources that programs can load dynamically
- Frameworks that other developers can use to create programs
- Palettes containing custom user-interface objects that other developers can drag and drop into their own user interfaces
Cocoa's component architecture allows you to create and distribute extensions and plug-ins easily for applications. In addition, this component architecture enables Distributed Objects, a distributed computing model that takes unique advantage of Cocoa's abilities.
The Cocoa Frameworks
Cocoa is composed of two object-oriented frameworks: Foundation (not to be confused with Core Foundation) and Application Kit. These layers fit into the system as shown in Figure 1-2.
The classes in Cocoa's Foundation framework provide objects and functionality that are the basis, or "foundation," of Cocoa and that do not have an impact on the user interface. The AppKit classes build on the Foundation classes and furnish the objects and behavior that your users see in the user interface, such as windows and buttons; the classes also handle things like mouse clicks and keystrokes. One way to think of the difference in the frameworks is that Cocoa's Foundation classes provide functionality that operates under the surface of the application, while the AppKit classes provide the functionality for the user interface that the user sees.
You can build Cocoa applications in three languages: Objective-C, Java, and AppleScript. Objective-C was the original language in which NeXTSTEP was developed and is the "native language" of Cocoa. It is the language that we will work with throughout this book. During the early development of Mac OS X (when it was still known as Rhapsody), a layer of functionality—known as the Java Bridge—was added to Cocoa, allowing the API to be used with Java. Support has been recently added for AppleScript in the form of AppleScript Studio, which allows AppleScripters to hook into the Cocoa frameworks to provide a comprehensive Aqua-based GUI to their applications.
The brainchild of Brad Cox, Objective-C is a very simple language. It is a superset of ANSI C with a few syntax and runtime extensions that make object-oriented programming possible. It started out as just a C preprocessor and a library, but over time developed into a complete runtime system, allowing a high degree of dynamism and yielding large benefits. Objective-C's syntax is uncomplicated, adding only a small number of types, preprocessor directives, and compiler directives to the C language, as well as defining a handful of conventions used to interact with the runtime system effectively.
Objective-C is a very dynamic language. The compiler throws very little information away, which allows the runtime to use this information for dynamic binding and other uses. We'll be covering the basics of Objective-C in Chapter 3. Also, there is a complete guide to Objective-C, Inside Mac OS X: The Objective-C Language, included as part of the Mac OS X Developer Tools installation. You can find this documentation in the /Developer/Documentation/Cocoa/ObjectiveC folder.
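To give a flavor of the syntax described above, here is a minimal, self-contained Objective-C program. This is a sketch only; it assumes the Foundation framework introduced later in this chapter, and the strings used are purely illustrative.

```objectivec
// Compile on Mac OS X with: cc hello.m -framework Foundation
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // A message send takes the form [receiver message:argument].
    // The method that actually runs is bound dynamically at runtime.
    NSString *greeting = [NSString stringWithFormat:@"Hello, %@!", @"Cocoa"];
    NSLog(@"%@ has length %u", greeting, [greeting length]);

    [pool release];
    return 0;
}
```

Note how little is added to plain C: the `#import` directive, the `@"..."` string literal, and the bracketed message-send syntax.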
Java is a cross-platform, object-oriented, portable, multithreaded, dynamic, secure, and thoroughly buzzword-compliant programming language developed by James Gosling and his team at Sun Microsystems in the 1990s. Since its introduction to the public in 1995, Java has gained a large following of programmers and has become a very important language in enterprise computing.
Cocoa provides a set of language bindings that allow you to program Cocoa applications using Java. Apple provides Java packages corresponding to the Foundation and Application Kit frameworks. Within reason, you can mix the APIs from the core Java packages (except for the Swing and AWT APIs) with Cocoa's packages.
For many years, AppleScript has provided an almost unmatched ability to control applications and many parts of the core Mac OS. This allows scripters to set up workflow solutions that combine the power of many applications. AppleScript combines an English-like language with many powerful language features, including list and record manipulation. The introduction of AppleScript Studio in December 2001, as well as its final release along with Mac OS X 10.2, allows scripters the ability to take their existing knowledge of AppleScript and build Cocoa-based applications quickly using Project Builder and Interface Builder.
Coverage of AppleScript Studio is beyond the scope of this book. To learn more about AppleScript Studio, see Building Applications with AppleScript Studio located in /Developer/Documentation/CoreTechnologies/AppleScriptStudio/BuildApps_AppScrptStudio.
The Foundation Framework
The Foundation framework is a set of over 80 classes and functions that define a layer of base functionality for Cocoa applications. In addition, the Foundation framework provides several paradigms that define consistent conventions for memory management and traversing collections of objects. These conventions allow you to code more efficiently and effectively by using the same mechanisms with various kinds of objects. Two examples of these conventions are standard policies for object ownership (who is responsible for disposing of objects) and a set of standard abstract classes that enumerate over collections. Figure 1-3 shows the major groupings into which the Foundation classes fall.
The Foundation framework includes the following:
- The root object class, NSObject
- Classes representing basic data types, such as strings and byte arrays
- Collection classes for storing other objects
- Classes representing system information and services
Programming Types and Operations
The Foundation framework provides many basic types, including strings and numbers. It also furnishes several classes whose purpose is to hold other objects—the array and dictionary collections classes. You'll learn more about these data types—and how to use them—throughout the chapters in this book, starting in Chapter 4.
- Cocoa's string class, NSString, supplants the familiar C programming data type char * to represent character string data. String objects contain Unicode characters rather than the narrow range of characters afforded by the ASCII character set, allowing them to contain characters in any language, including Chinese, Arabic, and Hebrew. The string classes provide an API to create both mutable and immutable strings and to perform string operations such as substring searching, string comparison, and concatenation.
String scanners take strings and provide methods for extracting data from them. While scanning, you can change the scan location to rescan a portion of the string or to skip ahead a certain number of characters. Scanners can also consider or ignore case.
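The string and scanner operations above can be sketched as follows; the strings and variable names are illustrative, not taken from the book.

```objectivec
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Mutable strings can be modified in place; immutable ones cannot.
    NSMutableString *title = [NSMutableString stringWithString:@"Learning "];
    [title appendString:@"Cocoa"];                  // concatenation
    NSRange where = [title rangeOfString:@"Cocoa"]; // substring search

    // A scanner extracts typed data from character data.
    NSScanner *scanner = [NSScanner scannerWithString:@"version 10"];
    int version = 0;
    [scanner scanString:@"version" intoString:NULL];
    [scanner scanInt:&version];

    NSLog(@"%@ (match at %u), scanned %d", title, where.location, version);
    [pool release];
    return 0;
}
```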
- Collections allow you to organize and retrieve data in a logical manner. The collections classes provide arrays using zero-based indexing, dictionaries using key-value pairs, and sets that can contain an unordered collection of distinct or nondistinct elements.
The collection classes can grow dynamically, and they come in two forms: mutable and immutable. Mutable collections, as their name suggests, can be modified programmatically after the collection is created. Immutable collections are locked after they are created and cannot be changed.
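A minimal sketch of the mutable/immutable distinction, using hypothetical contents:

```objectivec
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // An immutable array is locked at creation...
    NSArray *base = [NSArray arrayWithObjects:@"red", @"green", nil];
    // ...while a mutable array can grow dynamically.
    NSMutableArray *colors = [NSMutableArray arrayWithArray:base];
    [colors addObject:@"blue"];

    // Dictionaries store key-value pairs.
    NSMutableDictionary *prefs = [NSMutableDictionary dictionary];
    [prefs setObject:@"blue" forKey:@"highlightColor"];

    NSLog(@"%u colors; highlight = %@",
          [colors count], [prefs objectForKey:@"highlightColor"]);
    [pool release];
    return 0;
}
```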
- Data and values
- Data and value objects let simple allocated buffers, scalar types, pointers, and structures be treated as first-class objects. Data objects are object-oriented wrappers for byte buffers and can wrap data of any size. When the data size is more than a few memory pages, virtual memory management can be used. Data objects contain no information about the data itself, such as its type; the responsibility for how to use the data lies with the programmer.
For typed data, there are value objects. These are simple containers for a single data item. They can hold any of the scalar types, such as integers, floats, and characters, as well as pointers, structures, and object addresses, and allow object-oriented manipulation of these types. They can also provide functionality such as arbitrary precision arithmetic.
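The difference between untyped data objects and typed value objects can be sketched like this (the byte buffer and values are illustrative):

```objectivec
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // NSData wraps an untyped byte buffer; only the caller knows its meaning.
    const char bytes[] = { 0x63, 0x61, 0x66, 0x65 };
    NSData *blob = [NSData dataWithBytes:bytes length:sizeof(bytes)];

    // NSNumber and NSValue wrap scalars and structures as objects,
    // so they can live in collections alongside other objects.
    NSNumber *answer = [NSNumber numberWithInt:42];
    NSValue *extent = [NSValue valueWithRange:NSMakeRange(0, 4)];

    NSLog(@"%u bytes, number %@, range %@", [blob length], answer, extent);
    [pool release];
    return 0;
}
```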
- Dates and times
- Date and time classes offer methods for calculating temporal differences, displaying dates and times in any desired format, and adjusting dates and times based on location (i.e., time zone).
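Calculating a temporal difference is a one-liner with these classes; a small sketch:

```objectivec
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    NSDate *now = [NSDate date];
    NSDate *later = [NSDate dateWithTimeIntervalSinceNow:3600.0];

    // The difference between two dates, in seconds.
    NSTimeInterval gap = [later timeIntervalSinceDate:now];
    NSLog(@"%@ is %.0f seconds from now", later, gap);

    [pool release];
    return 0;
}
```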
- Exception handling
- An exception is a special condition that interrupts the normal flow of program execution. Exceptions let programs handle exceptional error conditions in a graceful manner. For example, an application might interpret saving a file in a write-protected directory as an exception and provide an appropriate alert message to the user.
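In the Cocoa of this era, exceptions are raised and caught with the NS_DURING / NS_HANDLER macros. The following sketch simulates the write-protected-directory case just described; the exception name is hypothetical.

```objectivec
#import <Foundation/Foundation.h>

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    NS_DURING
        // Interrupt the normal flow with an exception.
        [NSException raise:@"SaveException"
                    format:@"cannot save into a write-protected directory"];
    NS_HANDLER
        // localException is defined for us inside the handler block;
        // a real application would show the user an alert here.
        NSLog(@"Caught %@: %@",
              [localException name], [localException reason]);
    NS_ENDHANDLER

    [pool release];
    return 0;
}
```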
Operating System Entities and Services
The Foundation framework provides classes to access core operating-system functionality such as locks, threads, and timers. These services all work together to create a robust environment in which your application can run.
- Run loops
- The run loop is the programmatic interface to objects managing input sources. A run loop processes input for sources such as mouse and keyboard events from the window system, ports, timers, and other connections. Each thread has a run loop automatically created for it. When an application is started, the run loop in the default thread is started automatically. Run loops in threads that you create must be started manually. We'll talk about run loops in detail in Chapter 8.
- The notification-related classes implement a system for broadcasting notifications of changes within an application. An object can specify and post a notification, and any other object can register itself as an observer of that notification. This topic will also be covered in Chapter 8.
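The observer pattern above can be sketched with a small, hypothetical observer class (the class and notification names are illustrative):

```objectivec
#import <Foundation/Foundation.h>

@interface Watcher : NSObject
- (void)somethingChanged:(NSNotification *)note;
@end

@implementation Watcher
- (void)somethingChanged:(NSNotification *)note
{
    NSLog(@"Received %@", [note name]);
}
@end

int main(void)
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    Watcher *watcher = [[Watcher alloc] init];

    // Register as an observer, then broadcast the notification.
    [[NSNotificationCenter defaultCenter]
        addObserver:watcher
           selector:@selector(somethingChanged:)
               name:@"SomethingChanged"
             object:nil];
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"SomethingChanged" object:nil];

    [[NSNotificationCenter defaultCenter] removeObserver:watcher];
    [watcher release];
    [pool release];
    return 0;
}
```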
- A thread is an executable unit that has its own execution stack and is capable of independent input/output (I/O). All threads share the virtual-memory address space and communication rights of their task. When a thread is started, it is detached from its initiating thread and runs independently. Different threads within the same task can run on different CPUs in systems with multiple processors.
- A lock is used to coordinate the operation of multiple threads of execution within the same application. A lock can be used to mediate access to an application's global data or to protect a critical section of code, allowing it to run atomically—meaning that, at any given time, only one of the threads can access the protected resource.
- Using tasks, your program can run another program as a subprocess and monitor that program's execution. A task creates a separate executable entity; it differs from a thread in that it does not share memory space with the process that creates it.
- A port represents a communication channel to or from another port that typically resides in a different thread or task. These communication channels are not limited to a single machine, but can be distributed over a networked environment.
- Timers are used to send a message to an object at specific intervals. For example, you could create a timer to tell a window to update itself after a certain time interval. You can think of a timer as the software equivalent of an alarm clock.
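As a minimal sketch of how timers and run loops fit together, the following command-line program schedules a timer on the current thread's run loop and then runs the loop so the timer can fire. The Alarm class, its method name, and the time intervals are purely illustrative; only the Foundation classes and methods shown are real.

```objc
#import <Foundation/Foundation.h>

// A trivial object that receives the timer's message.
@interface Alarm : NSObject
- (void)fire:(NSTimer *)timer;
@end

@implementation Alarm
- (void)fire:(NSTimer *)timer
{
    NSLog(@"Wake up!");
}
@end

int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    Alarm *alarm = [[Alarm alloc] init];

    // Scheduling a timer automatically adds it as an input source
    // on the current thread's run loop.
    [NSTimer scheduledTimerWithTimeInterval:2.0
                                     target:alarm
                                   selector:@selector(fire:)
                                   userInfo:nil
                                    repeats:NO];

    // The timer fires only while the run loop is running.
    [[NSRunLoop currentRunLoop] runUntilDate:
        [NSDate dateWithTimeIntervalSinceNow:3.0]];

    [alarm release];
    [pool release];
    return 0;
}
```

In a Cocoa application you would not normally run the loop yourself; as noted above, the run loop of the default thread is started automatically when the application launches.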
The Foundation framework provides the functionality to manage your objects—from creating and destroying them to saving and sharing them in a distributed environment.
- Memory management
- Memory management ensures that objects are properly deallocated when they are no longer needed. This mechanism, which depends on general conformance to a policy of object ownership, automatically tracks objects that are marked for release and deallocates them at the close of the current run loop. Understanding memory management is important in creating successful Cocoa applications. We'll discuss this critical topic in depth in Chapter 4.
- Serialization and archiving
- Serializers make it possible to represent the data that an object contains in an architecture-independent format, allowing the sharing of data across applications. A specialized serializer, known as a Coder, takes this process a step further by storing class information along with the object. Archiving stores encoded objects and other data in files, to be used in later runs of an application or for distribution. This topic will also be covered in depth in Chapter 4.
- Distributed objects
- Cocoa provides a set of classes that build on top of ports and enable an interprocess messaging solution. This mechanism enables an application to make one or more of its objects available to other applications on the same machine or on a remote machine. Distributed objects are an advanced topic and are not covered in this book. For more information about distributed objects, see /Developer/Documentation/Cocoa/TasksAndConcepts/ProgrammingTopics/DistrObjects/index.html.
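The archiving facilities described above can be sketched in a few lines. This example encodes a simple object graph to disk with NSKeyedArchiver and restores it with NSKeyedUnarchiver; the file path and array contents are illustrative only.

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Encode an object graph and store it in a file on disk.
    NSArray *colors = [NSArray arrayWithObjects:@"Red", @"Green", @"Blue", nil];
    [NSKeyedArchiver archiveRootObject:colors toFile:@"/tmp/Colors.archive"];

    // In a later run (or in another application), restore the objects.
    NSArray *restored =
        [NSKeyedUnarchiver unarchiveObjectWithFile:@"/tmp/Colors.archive"];
    NSLog(@"Restored: %@", restored);

    [pool release];
    return 0;
}
```

Because the coder stores class information along with the data, the unarchiver can reconstruct the original objects without any extra hints from the caller.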
File and I/O Management
Filesystem and input/output (I/O) functionality includes URL handling, file management, and dynamic loading of code and localized resources.
- File management
- Cocoa provides a set of file-management utilities that allow you to create directories and files, extract the contents of files as data objects, change your current working location in the filesystem, and more. Besides offering a useful range of functionality, the file-management utilities insulate an application from the underlying filesystem, allowing the same functionality to be used to work with files on a local hard drive, a CD-ROM, or across a network.
- URL handling
- URLs and the resources they reference are accessible. URLs can be used to refer to files and are the preferred way to do so. Cocoa objects that can read or write data from or to a file can usually accept a URL, in addition to a pathname, as the file reference.
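A short sketch of the file-management utilities in action, using NSFileManager to create a directory and list another directory's contents. The paths here are illustrative; the point is that the same calls work regardless of where the path actually resides.

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSFileManager *manager = [NSFileManager defaultManager];

    // Create a directory, then list the contents of its parent.
    // The same methods work whether the path is on a local disk,
    // a CD-ROM, or a mounted network volume.
    [manager createDirectoryAtPath:@"/tmp/Scratch" attributes:nil];
    NSArray *contents = [manager directoryContentsAtPath:@"/tmp"];
    NSLog(@"Contents of /tmp: %@", contents);

    [pool release];
    return 0;
}
```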
The Foundation framework provides the ability to manage user preferences, the undo and redo of actions, data formatting, and localization to many languages. Cocoa applications can also be made responsive to AppleScript commands.
The Application Kit Framework
The Application Kit framework (or AppKit, as it's more commonly called) contains a set of over 120 classes and related functions that are needed to implement graphical, event-driven user interfaces. These classes implement the functionality needed to efficiently draw the user interface to the screen, communicate with video cards and screen buffers, and handle events from the keyboard and mouse.
Learning the many classes in the AppKit may seem daunting at first. However, you won't need to learn every feature of every class. Most of the AppKit classes are support classes that work behind the scenes helping other classes operate and with which you will not have to interact directly. Figure 1-4 shows how AppKit classes are grouped and related.
The user interface is how users interact with your application. You can create and manage windows, dialog boxes, pop-up lists, and other controls. We'll cover these topics in depth starting in Chapter 6.
- Windows
- The two principal functions of a window are to provide an area in which views can be placed and to accept events that the user creates through actions with the mouse and keyboard, distributing them to the appropriate views. Windows can be resized, minimized to the Dock, and closed. Each of these actions generates events that can be monitored by a program.
- Views
- A view is an abstract representation for all objects displayed in a window. Views provide the structure for drawing, printing, and handling events. Views are arranged within a window in a nested hierarchy of subviews.
- Panels
- Panels are a type of window used to display transient, global, or important information. For example, a panel should be used, rather than a window, to display error messages or to query the user for a response to remarkable or unusual circumstances.
The Application Kit implements some common panels for you, such as the Save, Open, and Print panels. These common panels give the user a consistent look and feel for performing common operations.
- Controls and widgets
- Cocoa provides a common set of user-interface objects such as buttons, sliders, and browsers, which you can manipulate graphically to control some aspect of your application. Just what a particular item does is up to you. Cocoa provides menus, cursors, tables, buttons, sheets, sliders, drawers, and many other widgets.
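As a sketch of how a control is wired to application behavior, the following controller creates a button programmatically and connects it using the target-action pattern. The Controller class, its window instance variable, and the buttonClicked: method are all assumed for illustration; in practice you would usually make these connections in Interface Builder.

```objc
#import <AppKit/AppKit.h>

@interface Controller : NSObject
{
    IBOutlet NSWindow *window;  // assumed to be connected elsewhere
}
- (void)addButton;
- (void)buttonClicked:(id)sender;
@end

@implementation Controller
- (void)addButton
{
    NSButton *button =
        [[NSButton alloc] initWithFrame:NSMakeRect(20.0, 20.0, 120.0, 32.0)];
    [button setTitle:@"Click Me"];

    // The target-action pattern: when clicked, the button sends
    // its action message to its target object.
    [button setTarget:self];
    [button setAction:@selector(buttonClicked:)];

    [[window contentView] addSubview:button];
    [button release];
}

- (void)buttonClicked:(id)sender
{
    NSLog(@"The button was clicked.");
}
@end
```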
As you'll find throughout this book, the Cocoa development tools provide quite a lot of assistance in making your applications behave according to Apple's Human Interface Guidelines. If you are interested in the details of these guidelines, read the book Inside Mac OS X: Aqua Human Interface Guidelines , commonly known as the "HIG." You can find a local copy of the HIG in /Developer/Documentation/Essentials/AquaHIGuidelines/AquaHIGuidelines.pdf.
The AppKit gives your applications ways to integrate and manage colors, fonts, and printing, and it even provides the dialog boxes for these features.
- Text and fonts
- Text can be entered into either simple text fields or into larger text views. Text fields allow entry for a single line of text, while a text view is something that you might find in a text-editing application. Text views also add the ability to format text with a variety of fonts and styles. We'll see the text-handling capabilities of Cocoa in action in Chapter 11.
- Images
- Images encapsulate graphics data, allowing you easy and efficient access to images stored in files on the disk and displayed on the screen. Cocoa handles many of the standard image formats, such as JPG, TIFF, GIF, PNG, PICT, and many more. We'll work a bit with images in Chapter 13.
- Colors
- Color is supported by a variety of classes representing colors and color views. There is a rich set of color formats and representations that automatically handle different color spaces. The color support classes define and present panels and views that allow the user to select and apply colors.
The AppKit provides a number of other facilities that allow you to create a robust application that takes advantage of all the features your users expect from an application on Mac OS X.
- Document architecture
- Document-based applications, such as word processors, are some of the more common types of applications developed. In contrast to applications such as iTunes, that need only a single window to work, document-based applications require sophisticated window-management capabilities. Various Application Kit classes provide an architecture for these types of applications, simplifying the work you must do. These classes divide and orchestrate the work of creating, saving, opening, and managing the documents of an application. We'll cover the document architecture in depth in Chapter 10.
- Printing
- The printing classes work together to provide the means for printing the information displayed in your application's windows and views. You can also create a PDF representation of a view. You'll see how to print in Chapter 12.
- Pasteboard
- The pasteboard is a repository for data that is copied from your application, and it makes this data available to any application that cares to use it. The pasteboard implements the familiar cut-copy-paste and drag-and-drop operations. Programmers familiar with Mac OS 9 or Carbon will recognize this functionality as the "Clipboard."
- Drag-and-drop
- With very little programming on your part, objects can be dragged and dropped anywhere. The Application Kit handles all the details of tracking the mouse and displaying a dragged representation of the data.
- Accessing the filesystem
- File wrappers correspond to files or directories on disk. A file wrapper holds the contents of a file in memory so it can be displayed, changed, or transmitted to another application. It also provides an icon for dragging the file or representing it as an attachment. The Open and Save panels also provide a convenient and familiar interface to the filesystem.
- Spellchecking
- A built-in spell server provides spellchecking facilities for any application that wants it, such as word processors, text editors, and email applications. Any text field or text view can provide spellchecking by using this service. We'll enable spellchecking in an application we build in Chapter 10.
- Contrary to what you may have heard elsewhere, Carbon is not doomed to fade away over time. This erroneous opinion seems to be caused by a misinterpretation of the word "transitional" to mean that the API itself will be going away, rather than meaning it is the API to use to transition older applications. Moving forward, it will remain one of the core development environments for Mac OS X. In fact, Apple engineers are striving to enable better integration between Carbon and Cocoa.
- BSD stands for Berkeley Software Distribution. For more information about BSD and its variants, see http://www.bsd.org/.
- Mac OS X 10.2 ships with localizations in the following languages: English, German, French, Dutch, Italian, Spanish, Japanese, Brazilian, Danish, Finnish, Korean, Norwegian, Swedish, and both Simplified and Traditional Chinese. Apple might add to or modify this list at any time.
Preventing Tetanus, Diphtheria, and Pertussis Among Adults:
Use of Tetanus Toxoid, Reduced Diphtheria Toxoid and
Acellular Pertussis Vaccine
Recommendations of the Advisory Committee on Immunization
Practices (ACIP) and Recommendation of ACIP, supported by the Healthcare
Infection Control Practices Advisory Committee (HICPAC), for Use of Tdap
Among Health-Care Personnel
Katrina Kretsinger, MD,1,6 Karen R. Broder, MD,1,6 Margaret M. Cortese, MD,2,6 M. Patricia Joyce, MD,1 Ismael Ortega-Sanchez, PhD,2 Grace M. Lee, MD,3 Tejpratap Tiwari, MD,1 Amanda C. Cohn, MD,1,5,6 Barbara A. Slade, MD,1 John K. Iskander, MD,4,6 Christina M. Mijalski, MPH,1 Kristin H. Brown,1 Trudy V. Murphy, MD1
1Division of Bacterial Diseases (proposed)
2Division of Viral Diseases (proposed), National Center for Immunization and Respiratory Diseases (proposed), CDC
3Harvard Medical School, Harvard Pilgrim Health Care & Children's Hospital Boston
4Office of the Chief Science Officer, Office of the Director, CDC
5EIS/Career Development Division, Office of Workforce and Career Development, CDC
6Commissioned Corps of the United States Public Health Service
The material in this report originated in the National Center for Immunization and Respiratory Diseases (proposed), Anne Schuchat, MD,
Director; Division of Bacterial Diseases (proposed), Alison Mawle, PhD, (Acting) Director, and the Office of the Chief Science Officer, Tanja Popovic,
MD, (Acting) Chief Science Officer; and Immunization Safety Office, Robert Davis, MD, Director.
Corresponding preparer: Katrina Kretsinger, MD, National Center for Immunization and Respiratory Diseases (proposed), CDC, 1600 Clifton
Road NE, MS C-25, Atlanta, GA 30333. Telephone: 404-639-8544; Fax: 404-639-8616; Email: [email protected].
On June 10, 2005, a tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine (Tdap) formulated for use
in adults and adolescents was licensed in the United States for persons aged 11--64 years
(ADACEL®, manufactured by sanofi pasteur, Toronto, Ontario, Canada). Prelicensure studies demonstrated safety and efficacy, inferred through
immunogenicity, against tetanus, diphtheria, and pertussis when Tdap was administered as a single booster dose to adults. To reduce
pertussis morbidity among adults and maintain the standard of care for tetanus and diphtheria prevention and to reduce
the transmission of pertussis to infants and in health-care settings, the Advisory Committee on Immunization Practices
(ACIP) recommends that: 1) adults aged 19--64 years should receive a single dose of Tdap to replace tetanus and diphtheria
toxoids vaccine (Td) for booster immunization against tetanus, diphtheria, and pertussis if they received their last dose of Td
>10 years earlier and they have not previously received Tdap; 2) intervals shorter than 10 years since the last Td may be used
for booster protection against pertussis; 3) adults who have or who anticipate having close contact with an infant aged <12
months (e.g., parents, grandparents aged <65 years, child-care providers, and health-care personnel) should receive a single dose
of Tdap to reduce the risk for transmitting pertussis. An interval as short as 2 years from the last Td is suggested; shorter
intervals can be used. When possible, women should receive Tdap before becoming pregnant. Women who have not previously
received Tdap should receive a dose of Tdap in the immediate postpartum period; 4) health-care personnel who work in hospitals
or ambulatory care settings and have direct patient contact should receive a single dose of Tdap as soon as feasible if they have
not previously received Tdap. An interval as short as 2 years from the last dose of Td is recommended; shorter intervals may
be used. These recommendations for use of Tdap in health-care personnel are supported by the Healthcare Infection
Control Practices Advisory Committee (HICPAC). This statement 1) reviews pertussis, tetanus and diphtheria vaccination policy in
the United States; 2) describes the clinical features and epidemiology of pertussis among adults; 3) summarizes
the immunogenicity, efficacy, and safety data of Tdap; and 4) presents recommendations for the use of Tdap among adults
aged 19--64 years.
Pertussis is an acute, infectious cough illness that remains endemic in the United States despite longstanding
routine childhood pertussis vaccination (1). Immunity to pertussis wanes approximately 5--10 years after completion of
childhood vaccination, leaving adolescents and adults susceptible to pertussis
(2--7). Since the 1980s, the number of reported
pertussis cases has steadily increased, especially among adolescents and adults (Figure). In 2005, a total of 25,616 cases of
pertussis were reported in the United States
(8). Among the reportable bacterial vaccine-preventable diseases in the United States
for which universal childhood vaccination has been recommended, pertussis is the least well controlled.
In 2005, a tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine, adsorbed (Tdap) product
formulated for use in adults and adolescents was licensed in the United States for persons aged 11--64 years
(ADACEL®, sanofi pasteur, Toronto, Ontario, Canada)
(11). The Advisory Committee on Immunization Practices (ACIP) reviewed evidence
and considered the use of Tdap among adults in public meetings during June 2005--February 2006. On October 26, 2005,
ACIP voted to recommend routine use of Tdap among adults aged 19--64 years. For adult contacts of infants, ACIP
recommended Tdap at an interval as short as 2 years since the previous Td. On February 22, 2006, ACIP recommended Tdap for
health-care personnel (HCP), also at an interval as short as 2 years since the last Td. This report summarizes the rationale
and recommendations for use of Tdap among adults in the United States. Recommendations for the use of Tdap
among adolescents are discussed elsewhere (12).
Pertussis Vaccination Policy
In the United States during 1934--1943, an annual average of 200,752 pertussis cases and 4,034 pertussis-related deaths
were reported (13,14; Sirotkin B, CDC, personal
communication, 2006). Although whole cell pertussis vaccines became available
in the 1920s (15), they were not routinely recommended for children until the 1940s after they were combined with diphtheria
and tetanus toxoids (DTP) (16,17). The number of reported pertussis cases declined dramatically following introduction of
universal childhood pertussis vaccination.
Pediatric acellular pertussis vaccines (i.e., diphtheria and tetanus toxoids and acellular pertussis antigens [DTaP]),
less reactogenic than the earlier whole-cell vaccines, were first licensed for use in children in 1991
(18,19). ACIP recommended that pediatric DTaP replace all pediatric DTP doses in 1997.
In 2005, two Tdap products were licensed for use in single doses in the United States
(11,20). BOOSTRIX® (GlaxoSmithKline Biologicals, Rixensart, Belgium) is licensed only for adolescents aged 10--18 years.
ADACEL® (sanofi pasteur, Toronto, Ontario, Canada) is licensed for adolescents and adults aged 11--64 years. ACIP has recommended
that adolescents aged 11--18 years receive a single dose of either Tdap product instead of adult tetanus and diphtheria
toxoids (Td) for booster immunization against tetanus, diphtheria, and pertussis if they have completed the recommended
childhood DTP or DTaP vaccination series and have not received Td or Tdap; age 11--12 years is the preferred age for the
adolescent Tdap dose (12).
One of the Tdap vaccines,
ADACEL® (sanofi pasteur), is licensed for use in adults and adolescents
(11). All references to Tdap in this report refer to the sanofi pasteur product unless otherwise indicated. Tdap is licensed for 1-dose
administration (i.e., not for subsequent decennial booster doses or subsequent wound prophylaxis). Prelicensure studies on the safety
or efficacy of subsequent doses were not conducted. No vaccine containing acellular pertussis antigens alone (i.e.,
without tetanus and diphtheria toxoids) is licensed in the United States. Acellular pertussis vaccines formulated with tetanus
and diphtheria toxoids have been available for use among adolescents and adults in other countries, including Canada,
Australia, and an increasing number of European countries (e.g., France, Austria, and Germany).
The efficacy against pertussis of an adolescent and adult acellular pertussis (ap) vaccine with the same pertussis antigens
as those included in BOOSTRIX® (without tetanus and diphtheria toxoids) was evaluated among 2,781 adolescents and
adults in a prospective, randomized trial in the United States
(28). Persons aged 15--64 years were randomized to receive one dose
of ap vaccine or hepatitis A vaccine
(Havrix®, GlaxoSmithKline Biologicals, Rixensart, Belgium). The primary outcome
measure was confirmed pertussis, defined as a cough illness lasting
>5 days with laboratory evidence of Bordetella
pertussis infection by culture, polymerase chain reaction (PCR), or paired serologic testing results (acute and convalescent). Nine persons in
the hepatitis A vaccine control group and one person in the ap vaccine group had confirmed pertussis during the study; vaccine efficacy against confirmed pertussis was 92% (95% confidence interval [CI] = 32%--99%)
(28). Results of this study were not considered in evaluation of Tdap for licensure in the United States.
Objectives of Adult Pertussis Vaccination Policy
The availability of Tdap for adults offers an opportunity to reduce the burden of pertussis in the United States.
The primary objective of replacing a dose of Td with Tdap is to protect the vaccinated adult against pertussis. The
secondary objective of adult Tdap vaccination is to reduce the reservoir of pertussis in the population at large, and thereby potentially
1) decrease exposure of persons at increased risk for complicated infection (e.g., infants), and 2) reduce the cost and
disruption of pertussis in health-care facilities and other institutional settings.
Pertussis is an acute respiratory infection caused by
B. pertussis, a fastidious gram-negative coccobacillus. The
organism elaborates toxins that damage respiratory epithelial tissue and have systemic effects, including promotion of
lymphocytosis (29). Other species of bordetellae, including
B. parapertussis and less commonly B. bronchiseptica
or B. holmesii, are associated with cough illness; the clinical presentation of
B. parapertussis can be similar to that of classic pertussis. Illness caused
by species of bordetellae other than B.
pertussis is not preventable by available vaccines.
Pertussis is transmitted from person to person through large respiratory droplets generated by coughing or sneezing.
The usual incubation period for pertussis is 7--10 days (range: 5--21 days)
(16,31,32). Patients with pertussis are most
infectious during the catarrhal and early paroxysmal phases of illness and can remain infectious for
>6 weeks (16,31,32). The
infectious period is shorter, usually <21 days, among older children and adults with previous vaccination or infection. Patients
with pertussis are highly infectious; attack rates among exposed, nonimmune household contacts are as high as
Factors that affect the clinical expression of pertussis include age, residual immunity from previous vaccination or
infection, and use of antibiotics early in the course of the illness before the cough onset
(32). Antibiotic treatment generally does
not modify the course of the illness after the onset of cough but is recommended to prevent transmission of the infection
(34--39). For this reason, vaccination is the most effective strategy for preventing the morbidity of pertussis.
Detailed recommendations on the indications and schedules for antimicrobials are published separately
Clinical Features and Morbidity Among Adults with Pertussis
B. pertussis infection among adults covers a spectrum from mild cough illness to classic pertussis; infection also can
be asymptomatic in adults with some level of immunity. When the presentation of pertussis is not classic, the cough illness
can be clinically indistinguishable from other respiratory illnesses. Classic pertussis is characterized by three phases of
illness: catarrhal, paroxysmal, and convalescent
(16,32). During the catarrhal phase, generally lasting 1--2 weeks, patients
experience coryza and intermittent cough; high fever is uncommon. The paroxysmal phase lasts 4--6 weeks and is characterized
by spasmodic cough, posttussive vomiting, and inspiratory whoop
(16). Adults with pertussis might experience a
protracted cough illness with complications that can require hospitalization. Symptoms slowly improve during the convalescent
phase, which usually lasts 2--6 weeks, but can last for months (Table 1).
Prolonged cough is a common feature of pertussis. In studies of adults with pertussis, the majority coughed for
>3 weeks and some coughed for many months (Table 1). Because of the prolonged illness, some adults undergo extensive
medical evaluations by providers in search of a diagnosis, if pertussis is not considered. Adults with pertussis often make
repeated visits for medical care. Of 2,472 Massachusetts adults with pertussis during 1988--2003, a total of 31% had one, 31%
had two, 35% had three or more medical visits during their illness; data were not available for
3% (Massachusetts Department of Public Health, unpublished data, 2005). Similarly, adults in Australia with pertussis reported a mean of 3.7 medical visits
for their illness, and adults in Quebec visited medical providers a mean of 2.5 times
(40,41). Adults with pertussis miss work:
in Massachusetts, 78% of 158 employed adults with pertussis missed work for a mean of 9.8 days (range: 0.1--180 days);
in Quebec, 67% missed work for a mean of 7 days; in Sweden, 65% missed work and 16% were unable to work for more than
1 month; in Australia, 71% missed work for a mean of 10 days (range: 0--93 days) and 10% of working adults missed
more than 1 month (40--43).
Adults with pertussis can have complications and might require hospitalization. Pneumonia has been reported in up to
5% and rib fracture from paroxysmal coughing in up to 4% (Table 2); up to 3% were hospitalized (12% in older adults). Loss
of consciousness (commonly "cough syncope") has been reported in up to 3% and 6% of adults with pertussis
(41,42). Urinary incontinence was commonly reported among women in studies that inquired about this feature
(41,42). Anecdotal reports from the literature describe other complications associated with pertussis in adults. In addition to rib fracture, cough
syncope, and urinary incontinence, complications arising from high pressure generated during coughing attacks include
pneumothorax (43), aspiration, inguinal hernia
(44), herniated lumbar disc (45), subconjunctival hemorrhage
(44), and one-sided hearing loss (43). One patient was reported to have carotid dissection
(46). In addition to pneumonia, other respiratory
tract complications include sinusitis (41), otitis media
(41,47), and hemoptysis (48). Neurologic and other complications
attributed to pertussis in adults also have been described, such as pertussis encephalopathy (i.e., seizures triggered by only minor
coughing episodes) (49), migraine exacerbation
(50), loss of concentration/memory
(43), sweating attacks (41), angina
(43), and severe weight loss (41).
Whether adults with co-morbid conditions are at higher risk for having pertussis or of suffering its complications
is unknown. Adults with cardiac or pulmonary disease might be at risk for poor outcomes from severe coughing paroxysms
or cough syncope (41,51). Two case reports of pertussis in human immunodeficiency virus (HIV)-infected adults (one
patient with acquired immunodeficiency syndrome [AIDS]) described prolonged cough illnesses and dyspnea in these patients,
but no complications (52,53).
During 1990--2004, five pertussis-associated deaths among U.S. adults were reported to CDC. The patients were aged
49--82 years and all had serious underlying medical conditions (e.g., severe diabetes, severe multiple sclerosis with
asthma, multiple myeloma on immunosuppressive therapy, myelofibrosis, and chronic obstructive pulmonary disease)
(54,55; CDC, unpublished data, 2005). In an outbreak of pertussis among older women in a religious institution in The Netherlands,
four of 75 residents were reported to have suffered pertussis-associated deaths. On the basis of clinical assessments, three of
the four deaths were attributed to intracranial hemorrhage during pertussis cough illnesses that had lasted >100 days.
Infant Pertussis and Transmission to Infants
Infants aged <12 months are more likely to suffer from pertussis and pertussis-related deaths than older age
groups, accounting for approximately 19% of nationally reported pertussis cases and 92% of the pertussis deaths in the United
States during 2000--2004. An average of 2,435 cases of pertussis were reported annually among infants aged <12 months, of
whom 43% were aged <2 months (CDC, unpublished data, 2005). Among infants aged <12 months reported with pertussis
for whom information was available, 63% were hospitalized and 13% had radiographically confirmed pneumonia (Table 3).
Rates of hospitalization and complications increase with decreasing age. Young infants, who can present with symptoms
of apnea and bradycardia without cough, are at highest risk for death from pertussis
(55). Of the 100 deaths from pertussis during 2000--2004, a total of 76 occurred among infants aged 0--1 month at onset of illness, 14 among infants aged
2--3 months, and two among infants aged 4--11 months. The case-fatality ratio among infants aged <2 months was 1.8%. A
study of pertussis deaths in the 1990s suggests that Hispanic infants and infants born at gestational age <37 weeks comprise a
larger proportion of pertussis deaths than would be expected on the basis of population estimates
(54). Two to 3 doses of pediatric DTaP (recommended at ages 2, 4, and 6 months) provide protection against severe pertussis.
Although the source of pertussis in infants often is unknown, adult close-contacts are an important source when a source
is identified. In a study of infants aged <12 months with pertussis in four states during 1999--2002, parents were asked
about cough illness in persons who had contact with the infant
(58). In 24% of cases, a cough illness in the mother, father,
or grandparent was reported (Table 4).
Pertussis diagnosis is complicated by limitations of diagnostic tests for pertussis. Certain factors affect the
sensitivity, specificity, and interpretation of these tests, including the stage of the disease, antimicrobial administration,
vaccination, the quality of technique used to collect the specimen, transport conditions to the testing laboratory, experience
of the laboratory, contamination of the sample, and use of nonstandardized tests
(59,60). In addition, tests and specimen collection materials might not be readily available to practicing clinicians.
Isolation of B. pertussis by culture is 100% specific; however, sensitivity of culture varies because fastidious
growth requirements make it difficult to transport and isolate the organism. Although the sensitivity of culture can reach
80%--90% under optimal conditions, in practice, sensitivity typically ranges from 30% to 60%
(61). The yield of B. pertussis from culture declines in specimens taken after 2 or more weeks of cough illness, after antimicrobial treatment, or after
previous pertussis vaccination (62). Three weeks after onset of cough, culture is only 1%--3% sensitive
(63). Although B. pertussis can be isolated in culture as early as 72 hours after plating, 1--2 weeks are required before a culture result can definitively be
called negative (64). Culture to isolate
B. pertussis is essential for antimicrobial susceptibility testing, molecular subtyping,
and validation of the results of other laboratory assays.
Direct fluorescent antibody (DFA) tests provide results in hours, but are generally less sensitive (sensitivity:
10%--50%) than culture. With use of monoclonal reagents, the specificity of DFA should be >90%; however, the interpretation of the
test is subjective, and misinterpretation by an inexperienced microbiologist can result in lower specificity
(65). Because of the limitations of DFA testing, CDC does not recommend its use.
Because of increased sensitivity and shorter turn-around-time, DNA amplification (e.g., PCR) is being used
more frequently to detect B. pertussis. When symptoms of classic pertussis are present (e.g., 2 weeks of paroxysmal cough),
PCR typically is 2--3 times more likely than culture to detect
B. pertussis in a positive sample
(59,66,67). The definitive classification of a PCR-positive, culture-negative sample as either a true positive or a false positive might not be possible.
No Food and Drug Administration (FDA)-licensed PCR test kit and no national standardized protocols, reagents, and
reporting formats are available. Approximately 100 different PCR protocols have been reported. These vary by DNA
purification techniques, PCR primers, reaction conditions, and product detection methods
(66). Laboratories must develop and
validate their own PCR tests. As a result, the analytical sensitivity, accuracy, and quality control of PCR-based
B. pertussis tests can vary widely among laboratories. The majority of laboratory validation studies have not sufficiently established the
predictive value of a positive PCR test to diagnose pertussis
(66). Use of PCR tests with low specificity can result in
unnecessary investigation and treatment of persons with false-positive PCR test results and inappropriate chemoprophylaxis of
their contacts (66). CDC/Council of State and Territorial Epidemiologists (CSTE) reporting guidelines support the use of PCR
to confirm the diagnosis of pertussis only when the case also meets the clinical case definition
(>2 weeks of cough with paroxysms, inspiratory "whoop," or posttussive vomiting)
(68,69) (Appendix B).
Diagnosis of pertussis by serology generally requires demonstration of a substantial change in titer for pertussis
antigens (usually fourfold) when comparing results from acute
(<2 weeks after cough onset) and convalescent sera
(>4 weeks after the acute sample). The results of serologic tests on paired sera usually become available late in the course of illness. A
single sample serologic assay with age-specific antibody reference values is used as a diagnostic test for adolescents and adults
in Massachusetts but is not available elsewhere
(70). Other single sample serologic assays lack standardization and do not
clearly differentiate immune responses to pertussis antigens following recent disease, from more remote disease, or from
vaccination (30). None of these serologic assays, including the Massachusetts assay, is licensed by FDA for routine diagnostic use in
the United States. For these reasons, CDC guidelines for laboratory confirmation of pertussis cases do not include serologic testing.
The only pertussis diagnostic tests that the CDC endorses are culture and PCR (when the CDC/CSTE clinical
case definition is also met) (Appendix B). CDC-sponsored studies are under way to evaluate both serology and PCR testing.
CDC guidance on the use of pertussis diagnostics will be updated as results of these studies become available.
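The paired-sera criterion described above (a substantial, usually fourfold, rise in titer between acute and convalescent specimens) can be sketched as a simple check. This is an illustrative sketch only; the function name, example titer values, and the fourfold default are assumptions for demonstration, not part of any standardized or FDA-licensed assay.

```python
# Sketch of the fourfold-rise criterion for paired-sera serologic diagnosis.
# Titer values and the helper name are illustrative assumptions.

def substantial_titer_rise(acute_titer: float, convalescent_titer: float,
                           fold_threshold: float = 4.0) -> bool:
    """Return True if the convalescent titer shows at least a
    fold_threshold-fold rise over the acute titer."""
    if acute_titer <= 0:
        raise ValueError("titers must be positive")
    return convalescent_titer / acute_titer >= fold_threshold

# Example: acute titer 1:20, convalescent titer 1:160 is an 8-fold rise
print(substantial_titer_rise(20, 160))  # True
print(substantial_titer_rise(20, 40))   # only 2-fold, so False
```

Note that in practice the acute specimen must be drawn <2 weeks after cough onset and the convalescent specimen >4 weeks later, which is why paired-sera results usually arrive late in the course of illness.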
Burden of Pertussis Among Adults
National Passive Surveillance
Pertussis has been a reportable disease in the United States since 1922
(71). State health departments report confirmed
and probable cases of pertussis to CDC through the passive National Notifiable Disease Surveillance System
(NNDSS); additional information on reported cases is collected through the Supplemental Pertussis Surveillance System
(Appendix B) (72,73). National passive reports provide information on the national burden of pertussis and are used
to monitor national trends in pertussis over time.
After the introduction of routine vaccination against pertussis in the late 1940s, the number of national pertussis
reports declined from approximately 200,000 annual cases in the prevaccine era
(13) to a low of 1,010 cases reported in
1976 (Figure). Since then, a steady increase in the number of reported cases has occurred; reports of cases among adults
and adolescents have increased disproportionately
(72,74,75). In 2004, 25,827 cases of pertussis were reported to the CDC
(9), the highest number since 1959. Adults aged 19--64 years
accounted for 7,008 (27%) cases (9). The increase in
nationally reported cases of pertussis during the preceding 15 years might reflect a true increase in the burden of pertussis among adults or
the increasing availability and use of PCR to confirm cases and increasing clinician awareness and reporting of pertussis.
Pertussis activity is cyclical with periodic increases every 3--4 years
(76,77). The typical periodicity has been less evident
in the last several years. However, during 2000--2004, the annual incidence of pertussis from national reports in different
states varied substantially by year among adults aged 19--64 years (Table 5). The number of reports and the incidence of
pertussis among adults also varied considerably by state, a reflection of prevailing pertussis activity and state surveillance systems
and reporting practices (72).
Serosurveys and Prospective Studies
In contrast to passively reported cases of pertussis, serosurveys and prospective population-based studies demonstrate
that B. pertussis infection is relatively common among adults with acute and prolonged cough illness and is even more
common when asymptomatic infections are considered. These studies documented higher rates of pertussis than those derived
from national passive surveillance reports in part because some diagnostic or confirmatory laboratory tests were available only
in the research setting and because study subjects were tested for pertussis early in the course of their cough illness when
recovery of B. pertussis is more likely. These studies provide evidence that national passive reports of adult pertussis constitute only
a small fraction (approximately 1%--2%) of illness among adults caused by
B. pertussis (78).
During the late 1980s and early 1990s, studies using serologic diagnosis of
B. pertussis infection estimated rates of recent
B. pertussis infection between 8%--26% among adults with cough illness of at least 5 days duration who sought medical
care (79--84). In a serosurvey conducted over a 3-year period among elderly adults, serologically defined episodes of
infection occurred at a rate of 3.3--8.0 per 100 person-years, depending on diagnostic criteria
(85). The prevalence of recent B.
pertussis infection was an estimated 2.9% among participants aged 10--49 years in a nationally representative sample of the
U.S. civilian, noninstitutionalized population
(86). Another study determined infection rates among healthy persons aged
15--65 years to be approximately 1% during an 11-month period
(87). The proportion of B. pertussis infections that are symptomatic
in studies was between 10% and 70%, depending on the setting, the population, and the diagnostic criteria employed.
Four prospective, population-based studies estimate the annual incidence of pertussis among adults in the United
States (Table 6). Two were conducted in health maintenance organizations (HMO)
(83,84), one determined the annual incidence
of pertussis among subjects enrolled in the control arm of a clinical trial of acellular pertussis vaccine
(28), and one was conducted among university students
(80). From a reanalysis of the database of the Minnesota HMO study, the
annual incidence of pertussis by decade of age on the basis of 15 laboratory-confirmed cases of pertussis was 229 (CI = 0--540),
375 (CI = 54--695) and 409 (CI = 132--686) per 100,000 population for adults aged 20--29, 30--39, and 40--49
years, respectively (CDC, unpublished data, 2005). When applied to the U.S. population, estimates from the three
prospective studies suggest the number of cases of symptomatic pertussis among adults aged 19--64 years could range from 299,000
to 626,000 cases annually in the United States.
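The national case estimates above come from scaling study incidence rates to the adult population. A minimal sketch of that arithmetic follows; the population denominator and the two incidence rates used here are illustrative assumptions chosen to reproduce the reported range, not figures taken from the studies themselves.

```python
# Sketch: applying an annual incidence rate (cases per 100,000 person-years)
# to a population denominator to estimate annual case counts.
# The population figure and rates below are illustrative assumptions.

def estimated_annual_cases(incidence_per_100k: float, population: int) -> int:
    """Scale an incidence rate to an expected annual case count."""
    return round(incidence_per_100k * population / 100_000)

us_adults_19_64 = 170_000_000  # assumed denominator for U.S. adults aged 19--64

# Rates of roughly 176 and 368 per 100,000 bracket the reported
# 299,000--626,000 range under this assumed denominator.
print(estimated_annual_cases(176, us_adults_19_64))
print(estimated_annual_cases(368, us_adults_19_64))
```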
Pertussis Outbreaks Involving Adults
Pertussis outbreaks involving adults occur in the community and the workplace. During an outbreak in Kent
County, Michigan in 1962, the attack rate among adults aged
>20 years in households with at least one case of pertussis was
21%; vulnerability to pertussis appeared unrelated to previous vaccination or history of pertussis in childhood
(3). In a statewide outbreak in Vermont in 1996, a total of 65 (23%) of 280 cases occurred among adults aged
>20 years (90); in a 2003
Illinois outbreak, 64 (42%) of 151 pertussis cases occurred among adults aged
>20 years (91). Pertussis outbreaks are
regularly documented in schools and health-care settings and occasionally in other types of workplaces (e.g., among employees of an
oil refinery). In school outbreaks, the majority of cases occur among students. However, teachers who are exposed
to students with pertussis also are infected
(90,93,94). In a Canadian study, teachers were at approximately fourfold higher risk
for pertussis compared with the general population during a period when high rates of pertussis occurred among students.
Background: Tetanus and Diphtheria
Tetanus is unique among diseases for which vaccination is routinely recommended because it is
noncommunicable. Clostridium tetani spores are ubiquitous in the environment and enter the body through nonintact skin. When
inoculated into oxygen-poor sites, such as necrotic tissue that can result from blunt trauma or deep puncture wounds,
C. tetani spores germinate to vegetative bacilli that multiply and elaborate tetanospasmin, a potent neurotoxin. Generalized tetanus
typically presents with trismus (lockjaw), followed by generalized rigidity caused by painful contractions of the skeletal muscles
that can impair respiratory function. Glottic spasm, respiratory failure, and autonomic instability can result in death
(95). During 1998--2000, the case-fatality ratio for reported tetanus was 18% in the United States
Following the introduction and widespread use of tetanus toxoid vaccine in the United States, tetanus became
uncommon. From 1947, when national reporting began, through 1998--2000, the incidence of reported cases declined from 3.9 to
0.16 cases per million population (96,97). Older adults have a disproportionate burden of illness from tetanus. During
1990--2001, a total of 534 cases of tetanus were reported; 301 (56%) cases occurred among adults aged 19--64 years and 201
(38%) among adults aged >65 years (CDC, unpublished data, 2005). Data from a national population-based serosurvey
conducted in the United States during 1988--1994 indicated that the prevalence of immunity to tetanus, defined as a tetanus
antitoxin concentration of >0.15 IU/mL, was >80% among adults aged 20--39 years and declined with increasing age.
Forty-five percent of men and 21% of women aged
>70 years had protective levels of antibody to tetanus
(98). The low prevalence of immunity and high proportion of tetanus cases among older adults might be related to the high proportion of older
adults, especially women, who never received a primary series.
Neonatal tetanus usually occurs as a result of
C. tetani infection of the umbilical stump. Susceptible infants are born
to mothers with insufficient maternal tetanus antitoxin concentration to provide passive protection
(95). Neonatal tetanus is rare in the United States. Three cases were reported during 1990--2004 (CDC, unpublished data, 2005). Two of the infants
were born to mothers who had no dose or only one dose of a tetanus toxoid-containing vaccine
(99,100); the vaccination history of the other mother was unknown (CDC, unpublished data, 2005). Well-established evidence supports the
recommendation for tetanus toxoid vaccine during pregnancy for previously unvaccinated women
(33,95,103--105). During 1999, a global maternal and neonatal tetanus elimination goal was adopted by the World Health Organization, the United
Nations Children's Fund, and the United Nations Population Fund.
Respiratory diphtheria is an acute and communicable infectious illness caused by strains of
Corynebacterium diphtheriae and rarely by other corynebacteria (e.g.,
C. ulcerans) that produce diphtheria toxin; disease caused by
C. diphtheriae and other corynebacteria are preventable through vaccination with diphtheria toxoid-containing vaccines. Respiratory diphtheria
is characterized by a grayish colored, adherent membrane in the pharynx, palate, or nasal mucosa that can obstruct the
airway. Toxin-mediated cardiac and neurologic systemic complications can occur.
Reports of respiratory diphtheria are rare in the United States
(107,108). During 1998--2004, seven cases of
respiratory diphtheria were reported to CDC
(9,10). The last culture-confirmed case of respiratory diphtheria caused
by C. diphtheriae in an adult aged
>19 years was reported in 2000
(108). A case of respiratory diphtheria caused by
C. ulcerans in an adult was reported in 2005 (CDC, unpublished data, 2005). Data obtained from the national
population-based serosurvey conducted during 1988--1994 indicated that the prevalence of immunity to diphtheria, defined as a diphtheria antitoxin
concentration of >0.1 IU/mL, progressively decreased with age, from 91% at age 6--11 years to approximately 30% by age 60--69 years.
Adherence to the ACIP-recommended schedule of decennial Td boosters in adults is important to prevent sporadic cases
of respiratory diphtheria and to maintain population immunity
(33). Exposure to diphtheria remains possible during travel
to countries in which diphtheria is endemic (information available at www.cdc.gov/travel/diseases/dtp.htm), from
cases, or from rare endemic diphtheria toxin-producing
strains of corynebacteria other than C.
diphtheriae (106). The clinical management of diphtheria, including use of diphtheria antitoxin, and the public health response are reviewed elsewhere.
Adult Acellular Pertussis Vaccine Combined with Tetanus and Diphtheria Toxoids
In the United States, one Tdap product is licensed for use in adults and adolescents.
ADACEL® (sanofi pasteur, Toronto, Ontario, Canada) was licensed on June 10, 2005, for use in persons aged 11--64 years as a single dose
active booster vaccination against tetanus, diphtheria, and pertussis
(11). Another Tdap product, BOOSTRIX® (GlaxoSmithKline Biologicals,
Rixensart, Belgium), is licensed for use in adolescents but not for use among persons aged
>19 years (20).
ADACEL® contains the same tetanus toxoid, diphtheria toxoid, and five pertussis antigens as those in
DAPTACEL® (pediatric DTaP), but
ADACEL® is formulated with reduced quantities of diphtheria toxoid and detoxified pertussis
toxin (PT). Each antigen is adsorbed onto aluminum phosphate. Each dose of
ADACEL® (0.5 mL) is formulated to contain 5
Lf [limit of flocculation unit] of tetanus toxoid, 2 Lf diphtheria toxoid, 2.5
µg detoxified PT, 5 µg filamentous
hemagglutinin (FHA), 3 µg pertactin (PRN), and 5
µg fimbriae types 2 and 3 (FIM). Each dose also contains aluminum phosphate
(0.33 mg aluminum) as the adjuvant, <5
µg residual formaldehyde, <50 ng residual glutaraldehyde, and 3.3 mg
2-phenoxyethanol (not as a preservative) per 0.5-mL dose.
ADACEL® contains no thimerosal.
ADACEL® is available in single dose vials
that are latex-free (11).
ADACEL® was licensed for adults on the basis of clinical trials demonstrating immunogenicity not inferior to
U.S.-licensed Td or pediatric DTaP
(DAPTACEL®, made by the same manufacturer) and an overall safety profile
clinically comparable with U.S.-licensed Td
(11,20). In a noninferiority trial, immunogenicity, efficacy, or safety endpoints
are demonstrated when a new product is at least as good as a comparator on the basis of a predefined and narrow margin for
a clinically acceptable difference between the study groups
(110). Adolescents aged 11--17 years also were studied; these
results are reported elsewhere (12,111,112).
A comparative, observer-blinded, multicenter, randomized controlled clinical trial conducted in the United States
evaluated the immunogenicity of the tetanus toxoid, diphtheria toxoid, and pertussis antigens among adults aged 18--64
years (11,111,112). Adults were randomized 3:1 to receive a single dose of
ADACEL® or a single dose of U.S.-licensed
Td (manufactured by sanofi pasteur; contains tetanus toxoid [5 Lf] and
diphtheria toxoid [2 Lf]) (11,111). Sera from a subset
of persons were obtained before and approximately 1 month after vaccination
(11). All assays were performed at the
immunology laboratories of sanofi pasteur in Toronto, Ontario,
Canada, or Swiftwater, Pennsylvania, using validated methods.
Adults aged 18--64 years were eligible for enrollment if they were in good health; adults aged
>65 years were not included in prelicensure studies. Completion of the childhood DTP/DTaP vaccination series was not required. Persons were
excluded if they had received a tetanus, diphtheria, or pertussis vaccine within 5 years; had a diagnosis of pertussis within 2 years;
had an allergy or sensitivity to any vaccine component; had a previous reaction to a tetanus, diphtheria, or pertussis
vaccine, including encephalopathy within 7 days or seizures within 3 days of vaccination; had an acute respiratory illness on the day
of enrollment; had any immunodeficiency, substantial underlying disease, or neurologic impairment; had daily use of
oral, nonsteroidal anti-inflammatory drugs; had received blood products or immunoglobulins within 3 months; or were
pregnant (11,112) (sanofi pasteur, unpublished data, 2005).
Tetanus and Diphtheria Toxoids
The efficacy of the tetanus toxoid and the diphtheria toxoid components of
ADACEL® was inferred from the immunogenicity of these antigens using established serologic correlates of protection
(95,105). Immune responses to tetanus and diphtheria antigens were compared between the
ADACEL® and Td groups, with 739--742 and 506--509
persons, respectively. One month postvaccination, the tetanus antitoxin seroprotective
(>0.1 IU/mL) and booster response rates among
adults who received ADACEL® were noninferior to those who received Td. The seroprotective rate for tetanus was 100%
(CI = 99.5%--100%) in the ADACEL® group and 99.8% (CI = 98.9%--100%) in the
Td group. The booster response rate to tetanus* in the
ADACEL® group was 63.1% (CI = 59.5%--66.6%) and 66.8% (CI = 62.5%--70.9%) in the
Td group (11,111). One month postvaccination, the diphtheria antitoxin seroprotective
(>0.1 IU/mL) and booster response rates* among adults who received a single dose of
ADACEL® were noninferior to those who received Td. The
seroprotective rate for diphtheria was 94.1% (CI = 92.1%--95.7%) in the
ADACEL® group and 95.1% (CI = 92.8%--96.8%) in the Td group.
The booster response rate to
diphtheria* in the
ADACEL® group was 87.4% (CI = 84.8%--89.7%) and 83.4% (CI = 79.9%--86.5%)
in the Td group (11,111).
In contrast to tetanus and diphtheria, no well-accepted serologic or laboratory correlate of protection for pertussis
exists (113). A consensus was reached at a 1997 meeting of the Vaccines and Related Biological Products Advisory
Committee (VRBPAC) that clinical endpoint efficacy studies of acellular pertussis vaccines among adults were not required for
Tdap licensure. Rather, the efficacy of the pertussis components of Tdap administered to adults could be inferred using a
serologic bridge to infants vaccinated with pediatric DTaP during clinical endpoint efficacy trials for pertussis
(114). The efficacy of the pertussis components of
ADACEL® was evaluated by comparing the immune responses (geometric mean
antibody concentration [GMC]) of adults vaccinated with a single dose of
ADACEL® to the immune responses of infants
vaccinated with 3 doses of
DAPTACEL® in a Swedish vaccine efficacy trial during the 1990s
(11,115). ADACEL® and
DAPTACEL® contain the same five pertussis antigens, except
ADACEL® contains one fourth the quantity of detoxified PT
in DAPTACEL® (116). In the Swedish trial, efficacy
of 3 doses of DAPTACEL® against World Health
Organization-defined pertussis (>21 days of paroxysmal cough with confirmation of
B. pertussis infection by culture and serologic testing or
an epidemiologic link to a household member with culture-confirmed pertussis) was 85% (CI = 80%--89%)
(11,115). The percentage of persons with a booster response to vaccine pertussis antigens exceeding a predefined lower limit for
an acceptable booster response also was evaluated. The
anti-PT, anti-FHA, anti-PRN, and anti-FIM GMCs of adults 1
month after a single dose of ADACEL®
were noninferior to those of infants after 3 doses of
DAPTACEL® (Table 7) (11).
Booster response rates to the pertussis
antigens contained in
ADACEL® (anti-PT, anti-FHA, anti-PRN,
and anti-FIM) among 739 adults 1 month following administration of
ADACEL® met prespecified criteria for an acceptable response.
Booster response rates to pertussis antigens were: anti-PT, 84.4% (CI = 81.6%--87.0%); anti-FHA, 82.7% (CI =
79.8%--85.3%); anti-PRN, 93.8% (CI =
91.8%--95.4%); and anti-FIM, 85.9% (CI = 83.2%--88.4%).
The primary adult safety study, conducted in the United States, was a randomized, observer-blinded, controlled study
of 1,752 adults aged 18--64 years who received a single dose of
ADACEL®, and 573 who received Td. Data on solicited
local and systemic adverse events were collected using standardized diaries for the day of vaccination and the next 14
consecutive days (i.e., within 15 days following vaccination).
Five adults experienced immediate events within 30 minutes of vaccination
(ADACEL® [four persons] and Td [one]);
all incidents resolved without sequelae. Three of these events were classified under nervous system disorders
(hypoesthesia/paresthesia). No incidents of syncope or anaphylaxis were reported.
Solicited Local Adverse Events
Pain at the injection site was the most frequently reported local adverse event among adults in both vaccination
groups (Table 8). Within 15 days following vaccination, rates of any pain at the injection site were comparable among
adults vaccinated with ADACEL® (65.7%) and
Td (62.9%). The rates of pain, erythema, and swelling were noninferior in
the ADACEL® recipients compared with the Td recipients
(Table 8) (11,111). No case of whole-arm swelling was reported
in either vaccine group (112).
Solicited Systemic Adverse Events
The most frequently reported systemic adverse events during the 15 days following vaccination were headache,
generalized body aches, and tiredness (Table 9). The proportion of adults reporting fever
>100.4°F (38°C) following vaccination
was comparable in the ADACEL® (1.4%) and Td (1.1%) groups, and the noninferiority criterion for
ADACEL® was achieved. The rates of the other solicited systemic adverse events also were comparable between the
ADACEL® and Td groups (11).
Serious Adverse Events
Serious adverse events (SAEs) within 6 months after vaccination were reported among 1.9% of the vaccinated adults: 33
of 1,752 in the ADACEL® group and 11 of the 573 in the Td group
(111,116). Two of these SAEs were neuropathic events
in ADACEL® recipients and were assessed by the investigators as possibly related to vaccination. A woman aged 23 years
was hospitalized for a severe migraine with unilateral facial paralysis 1 day following vaccination. A woman aged 49 years
was hospitalized 12 days after vaccination for symptoms of radiating pain in her neck and left arm (vaccination arm);
nerve compression was diagnosed. In both cases, the symptoms resolved completely over several days
(11,111,112,116). One seizure event occurred in a woman aged 51 years 22 days after
ADACEL® and resolved without sequelae; study
investigators reported this event as unrelated to vaccination
(116). No physician-diagnosed Arthus reaction or case of
Guillian-Barré syndrome was reported in any
ADACEL® recipient, including the 1,184 adolescents in the adolescent primary safety
study (sanofi pasteur, unpublished data, 2005).
Comparison of Immunogenicity and Safety Results Among Age Groups
Immune responses to the antigens in
ADACEL® and Td in adults (aged 18--64 years) 1 month after vaccination
were comparable to or lower than responses in adolescents (aged 11--17 years) studied in the primary adolescent prelicensure
trial (111). All adults in three age strata (18--28, 29--48, 49--64 years) achieved a seroprotective antibody level for tetanus
after ADACEL®. Seroprotective responses to diphtheria following
ADACEL® were comparable among adolescents (99.8%)
and young adults aged 18--28 years (98.9%) but were lower among adults aged 49--64 years (85.4%)
(111). Generally, adolescents had better immune response to pertussis antigens than adults after receipt
of ADACEL®, although GMCs
in both groups were higher than those of infants vaccinated in the
DAPTACEL® vaccine efficacy trial. Immune response to
PT and FIM decreased with increasing age in adults; no consistent relation between immune responses to FHA or PRN and
age was observed (111).
Overall, local and systemic events after
ADACEL® vaccination were less frequently reported by adults than
adolescents. Pain, the most frequently reported adverse event in the studies, was reported by 77.8% of adolescents and 65.7% of
adults vaccinated with
ADACEL®. Fever was also reported more frequently by adolescents (5%) than adults (1.4%) vaccinated
with ADACEL® (11,111). In adults, a trend for decreased frequency of local adverse events in the older age groups was observed.
Simultaneous Administration of
ADACEL® with Other Vaccines
Trivalent Inactivated Influenza Vaccine
Safety and immunogenicity of
ADACEL® co-administered with trivalent inactivated influenza vaccine ([TIV]
Fluzone®, sanofi pasteur, Swiftwater, Pennsylvania) was evaluated in adults aged 19--64 years using methods similar to the
primary ADACEL® studies. Adults were randomized into two groups. In one group,
ADACEL® and TIV were administered simultaneously in different arms (N = 359). In the other group, TIV was administered first, followed by
ADACEL® 4--6 weeks later (N = 361).
The antibody responses (assessed 4--6 weeks after vaccination) to diphtheria, three pertussis antigens (PT, FHA, and
FIM), and all influenza antigens§ were noninferior in persons vaccinated simultaneously with
ADACEL® compared with those vaccinated sequentially (TIV first, followed by
ADACEL®).¶ For tetanus, the proportion of persons achieving
a seroprotective antibody level was noninferior in the simultaneous group (99.7%) compared with the sequential
group (98.1%). The booster response rate to tetanus in the simultaneous group (78.8%) was lower than the sequential
group (83.3%), and the noninferiority criterion for simultaneous vaccination was not met. The slightly lower proportion of
persons demonstrating a booster response to tetanus in the simultaneous group is unlikely to be clinically important because >98% of
subjects in both groups achieved seroprotective levels. The immune response to the PRN pertussis antigen in
the simultaneous group did not meet noninferiority criterion when compared with the immune response in the sequential
group (111). The lower limit of the 90% CI on the ratio of the anti-PRN GMCs (simultaneous vaccination group divided by
the sequential vaccination group) was 0.61, and the noninferiority criterion was >0.67; the clinical importance of this finding
is unclear (111).
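The noninferiority criterion just described compares the lower limit of the 90% CI on a ratio of geometric mean concentrations (GMCs) against a prespecified margin (0.67 here). A hedged sketch of that calculation follows: the CI is formed on the log scale and exponentiated. The antibody values are simulated, and the normal-approximation approach is an assumption for illustration, not the trial's actual statistical method.

```python
# Sketch: 90% CI for a ratio of geometric mean concentrations (GMCs),
# computed on the log scale, with a 0.67 noninferiority margin.
# Simulated data; not the trial's actual method or values.
import math
import random
import statistics

def gmc_ratio_ci(group_a, group_b, z=1.645):
    """90% CI for GMC(a)/GMC(b) via a normal approximation on log titers."""
    la = [math.log(x) for x in group_a]
    lb = [math.log(x) for x in group_b]
    diff = statistics.mean(la) - statistics.mean(lb)
    se = math.sqrt(statistics.variance(la) / len(la) +
                   statistics.variance(lb) / len(lb))
    return math.exp(diff - z * se), math.exp(diff + z * se)

random.seed(0)
simultaneous = [random.lognormvariate(3.0, 1.0) for _ in range(300)]
sequential = [random.lognormvariate(3.1, 1.0) for _ in range(300)]
lower, upper = gmc_ratio_ci(simultaneous, sequential)
print(f"GMC ratio 90% CI: ({lower:.2f}, {upper:.2f})")
print("noninferior" if lower > 0.67 else "noninferiority criterion not met")
```

Under this framing, the anti-PRN result above (lower limit 0.61 against a margin of 0.67) fails the criterion even though the point estimate of the ratio may be close to 1.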
Adverse events were solicited only after
ADACEL® (not TIV) vaccination
(111). Within 15 days of vaccination, rates
of erythema, swelling, and fever were comparable in both vaccination groups (Table 10). However, the frequency of pain at
the ADACEL® injection site was higher in the simultaneous group (66.6%) than in the sequential group (60.8%), and
the noninferiority criterion for simultaneous vaccination was not achieved.
Hepatitis B Vaccine
Safety and immunogenicity of
ADACEL® administered with hepatitis B vaccine was not studied in adults but
was evaluated among adolescents aged 11--14 years using methods similar to the primary
ADACEL® studies. Adolescents were randomized into two groups. In one group,
ADACEL® and hepatitis B vaccine (Recombivax
HB®, Merck and Co., White House Station, New Jersey) were administered simultaneously (N = 206). In the other group,
ADACEL® was administered first, followed by hepatitis B vaccine 4--6 weeks later (N = 204). No interference was observed in the immune responses
to any of the vaccine antigens when
ADACEL® and hepatitis B vaccine were administered simultaneously or sequentially.**
Adverse events were solicited only after
ADACEL® vaccination (not hepatitis B vaccination)
(111). Within 15 days of vaccination, the reported rates of injection site pain (at the
ADACEL® site) and fever were comparable when
ADACEL® and hepatitis B vaccine were administered simultaneously or sequentially (Table 11). However, rates of erythema and swelling
at the ADACEL® injection site were higher in the simultaneous group, and noninferiority for simultaneous vaccination was
not achieved. Swollen and/or sore joints were reported in 22.5% of persons who received simultaneous vaccination, and in
17.9% of persons in the sequential group. The majority of joint complaints were mild in intensity, with a mean duration of 1.8 days.
Safety and immunogenicity of simultaneous administration of
ADACEL® with other vaccines were not evaluated
during prelicensure studies (11).
Safety Considerations for Adult Vaccination with Tdap
Tdap prelicensure studies in adults support the safety of
ADACEL® (11). However, sample sizes were insufficient to
detect rare adverse events. Enrollment criteria excluded persons who had received vaccines containing tetanus toxoid,
diphtheria toxoid, and/or pertussis components during the preceding 5 years
(111,112). Persons with certain neurologic conditions
were excluded from prelicensure studies. Therefore, in making recommendations on the spacing and administration sequence
of vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis components and on vaccination of adults with
a history of certain neurologic conditions or previous adverse events after vaccination, ACIP considered data from a range
of pre- and postlicensure studies of Tdap and other vaccines containing these components. Safety data from the Vaccine
Adverse Event Reporting System (VAERS) and postlicensure studies are monitored on an ongoing basis and will facilitate detection
of potential adverse reactions following more widespread use of Tdap in adults.
Spacing and Administration Sequence of Vaccines Containing Tetanus
Toxoid, Diphtheria Toxoid, and Pertussis Antigens
Historically, moderate and severe local reactions following tetanus and diphtheria toxoid-containing vaccines have
been associated with older, less purified vaccines, larger doses of toxoid, and frequent dosing at short intervals
(117--122). In addition, high pre-existing antibody titers to tetanus or diphtheria toxoids in children, adolescents, and adults primed
with these antigens have been associated with increased rates for local reactions to booster doses of tetanus or diphtheria
toxoid-containing vaccines (119,122--124). Two adverse events of particular clinical interest, Arthus reactions and extensive
limb swelling (ELS), have been associated with vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis components.
Arthus Reactions
Arthus reactions (type III hypersensitivity reactions) are rarely reported after vaccination and can occur after tetanus
toxoid-containing or diphtheria toxoid-containing vaccines
(33,122,125--129; CDC, unpublished data, 2005). An Arthus reaction
is a local vasculitis associated with deposition of immune complexes and activation of complement. Immune complexes form
in the setting of high local concentration of vaccine antigens and high circulating antibody concentration
(122,125,126,130). Arthus reactions are characterized by severe pain, swelling, induration, edema, hemorrhage, and occasionally by local
necrosis. These symptoms and signs usually develop 4--12 hours after vaccination; by contrast, anaphylaxis (an immediate type
I hypersensitivity reaction) usually occurs within minutes of vaccination. Arthus reactions usually resolve without
sequelae. ACIP has recommended that persons who experienced an Arthus reaction after a dose of tetanus toxoid-containing
vaccine not receive Td more frequently than every 10 years, even for tetanus prophylaxis as part of wound management.
Extensive Limb Swelling
ELS reactions have been described following the fourth or fifth dose of pediatric DTaP
(131--136), and ELS has been reported to VAERS almost as frequently following Td as following pediatric DTaP
(136). ELS is not disabling, is not often brought to medical attention, and resolves without complication within 4--7 days
(137). ELS is not considered a precaution or contraindication for Tdap.
Interval Between Td and Tdap
ACIP has recommended a 10-year interval for routine administration of Td and encourages an interval of at least 5
years between the Td and Tdap dose for adolescents
(12,33). Although administering Td more often than every 10 years (5
years for some tetanus-prone wounds) is not necessary to provide protection against tetanus or diphtheria, administering a dose
of Tdap <5 years after Td could provide a health benefit by protecting against pertussis. Prelicensure clinical trials of
ADACEL® excluded persons who had received doses of a diphtheria or tetanus toxoid-containing vaccine during the preceding 5 years.
The safety of administering a dose of Tdap at intervals <5 years after Td or pediatric DTP/DTaP has not been studied
in adults but was evaluated in Canadian children and adolescents
(139). The largest Canadian study was a
nonrandomized, open-label study of 7,001 students aged 7--19 years residing in Prince Edward Island. This study assessed the rates of
adverse events after ADACEL® and compared reactogenicity of
ADACEL® administered at year intervals of 2--9 years
(eight cohorts) versus >10 years after the last tetanus and diphtheria toxoid-containing vaccine (Td, or pediatric DTP or
DTaP). The 2-year interval was defined as >18 months to
<30 months. Vaccination history for type of pertussis vaccine(s)
received (pediatric DTP and DTaP) also was assessed. The number of persons assigned to cohorts ranged from 464 in the
2-year cohort to 925 in the 8-year cohort. Among the persons in the
2-year cohort, 214 (46%) received the last tetanus and
diphtheria toxoid-containing vaccine 18--23 months before
ADACEL®. Adverse event diary cards were returned for 85% of
study participants with a known interval; 90% of persons in the 2-year interval cohort provided safety data.
Four SAEs were reported in the Prince Edward Island study; none were vaccine-related. No Arthus reaction was
reported. Rates of reported severe local adverse reactions, fever, or any pain were not increased in persons who received
ADACEL® at intervals <10 years. Rates of local reactions were not increased among persons who received 5 doses of pediatric DTP, with
or without Td (intervals of 2--3 years or 8--9 years).
Two smaller Canadian postlicensure safety studies in adolescents also showed acceptable safety when
ADACEL® was administered at intervals <5 years after tetanus and diphtheria toxoid-containing vaccines
(140,141). Taken together, these three Canadian studies support the safety of using
ADACEL® after Td at intervals <5 years. The largest study
suggests intervals as short as approximately 2 years are acceptably safe
(139). Because rates of local and systemic reactions after Tdap
in adults were lower than or comparable to rates in adolescents during U.S. prelicensure trials, the safety of using intervals
as short as approximately 2 years between Td and Tdap in adults can be inferred from the Canadian studies.
Simultaneous and Nonsimultaneous Vaccination with Tdap and Diphtheria-Containing MCV4
Tdap and tetravalent meningococcal conjugate vaccine ([MCV4]
Menactra® manufactured by sanofi pasteur,
Swiftwater, Pennsylvania) contain diphtheria toxoid
(142,143). Each of these vaccines is licensed for use in adults, but MCV4 is
not indicated for active vaccination against diphtheria
(143). In MCV4, the diphtheria toxoid (approximately 48
µg) serves as the carrier protein that improves immune responses to meningococcal antigens. Precise comparisons cannot be made between
quantity of diphtheria toxoid in the vaccines; however, the amount in a dose of MCV4 is estimated to be comparable to
the average quantity in a dose of pediatric DTaP
(144). No prelicensure studies were conducted of simultaneous or
sequential vaccination with Tdap and MCV4. ACIP has considered the potential for adverse events following simultaneous
and nonsimultaneous vaccination with Tdap and MCV4
(12). ACIP recommends simultaneous vaccination with Tdap
and MCV4 for adolescents when both vaccines are indicated, and any sequence if simultaneous administration is not
feasible (12,138). The same principles apply to adult patients for whom Tdap and MCV4 are indicated.
Neurologic and Systemic Events Associated with Vaccines with
Pertussis Components or Tetanus Toxoid-Containing Vaccines
Vaccines with Pertussis Components
Concerns about the possible role of vaccines with pertussis components in causing neurologic reactions or
exacerbating underlying neurologic conditions in infants and children are long-standing
(16,145). ACIP recommendations to defer pertussis vaccines in infants with suspected or evolving neurological disease, including seizures, have been based primarily
on the assumption that neurologic events after vaccination (with whole cell preparations in particular) might complicate
the subsequent evaluation of infants' neurologic status
In 1991, the Institute of Medicine (IOM) concluded that evidence favored acceptance of a causal relation between
pediatric DTP vaccine and acute encephalopathy; IOM has not evaluated associations between acellular vaccines and neurologic
events for evidence of causality (128).
During 1993--2002, active surveillance in Canada failed to ascertain any
acute encephalopathy cases causally related to whole cell or acellular pertussis vaccines among a population administered
6.5 million doses of pertussis-containing vaccines
(146). In children with a history of encephalopathy not attributable to
another identifiable cause occurring within 7 days
after vaccination, subsequent doses of pediatric DTaP vaccines are contraindicated.
ACIP recommends that children with progressive neurologic conditions not be vaccinated with Tdap until the
condition stabilizes (1). However, progressive neurologic disorders that are chronic and stable (e.g., dementia) are more common
among adults, and the possibility that Tdap would complicate subsequent neurologic evaluation is of less clinical concern. As
a result, chronic progressive neurologic conditions that are stable in adults do not constitute a reason to delay Tdap; this is
in contrast to unstable or evolving neurologic conditions (e.g., cerebrovascular events and acute encephalopathic conditions).
Tetanus Toxoid-Containing Vaccines
ACIP considers Guillain-Barré syndrome
<6 weeks after receipt of a tetanus toxoid-containing vaccine a precaution
for subsequent tetanus toxoid-containing vaccines
(138). IOM concluded that evidence favored acceptance of a causal
relation between tetanus toxoid-containing vaccines and Guillain-Barré syndrome. This decision is based primarily on a single,
well-documented case report (128,147). A subsequent analysis of active surveillance data in both adult and pediatric
populations failed to demonstrate an association between receipt of a tetanus toxoid-containing vaccine and onset of
Guillain-Barré syndrome within 6 weeks following vaccination.
A history of brachial neuritis is not considered by ACIP to be a precaution or contraindication for administration of
tetanus toxoid-containing vaccines
(138,149,150). IOM concluded that evidence from case reports and uncontrolled studies
involving tetanus toxoid-containing vaccines did favor a causal relation between tetanus toxoid-containing vaccines and brachial
neuritis (128); however, brachial neuritis is usually
self-limited. Brachial neuritis is considered to be a compensable event through
the Vaccine Injury Compensation Program (VICP).
Economic Considerations for Adult Tdap Use
The morbidity and societal cost of pertussis in adults is substantial. A study that retrospectively assessed the
economic burden of pertussis in children and adults in Monroe County, New York, during 1989--1994 indicated that, although
economic costs were not identified separately by age group, 14 adults incurred an average of 0.8 outpatient visits and
0.2 emergency department visits per case
(151). The mean time to full recovery was 74 days. A prospective study in
Monroe Country, New York, during 1995--1996 identified six adult cases with an average societal cost of $181 per case
(152); one third was attributed to nonmedical costs. The mean time to full recovery was 66 days (range: 3--383 days). A study of
the medical costs associated with hospitalization in four states during 1996--1999 found a mean total cost of $5,310 in
17 adolescents and 44 adults (153). Outpatient costs and nonmedical costs were not considered in this study.
A study in Massachusetts retrospectively assessed medical costs of confirmed pertussis in 936 adults during 1998--2000
and prospectively assessed nonmedical costs in 203 adults during 2001--2003
(42). The mean medical and nonmedical cost
per case was $326 and $447, respectively, for a societal cost of $773. Nonmedical costs constituted 58% of the total cost
in adults. If the cost of antimicrobials to treat contacts and the cost of personal time were included, the societal cost could be
as high as $1,952 per adult case.
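As a quick arithmetic check on the Massachusetts figures, the quoted societal cost per case and the nonmedical share follow directly from the per-case medical and nonmedical costs stated above. This is a minimal sketch using only values quoted in the text:

```python
# Per-case costs from the Massachusetts study (1998--2003), as quoted above.
medical = 326      # mean medical cost per adult case, USD
nonmedical = 447   # mean nonmedical cost per adult case, USD

societal = medical + nonmedical          # quoted societal cost: $773
nonmedical_share = nonmedical / societal # quoted as 58% of total cost

print(societal)                          # 773
print(round(nonmedical_share * 100))     # 58
```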
Cost-Benefit and Cost-Effectiveness Analyses of Adult Tdap Vaccination
Results of two economic evaluations that examined adult vaccination strategies for pertussis varied. A cost-benefit
analysis in 2004 indicated that adult pertussis vaccination would be cost-saving
(154). A cost-effectiveness analysis in 2005
indicated that adult pertussis vaccination would not be cost-effective
(155). The strategies and assumptions used in the two models
had two major differences. The universal vaccination strategy used in cost-benefit analysis was a one-time adult
booster administered to all adults aged
>20 years; the strategy used in the cost-effectiveness study was for decennial boosters over
the lifetime of adults. The incidence estimates used in the two models also differed. In the cost-benefit study, incidence
ranged from 159 per 100,000 population for adults aged 20--29 years to 448 for adults aged
>40 years. In contrast, the cost-effectiveness study used a conservative incidence estimate of 11 per 100,000 population based on enhanced surveillance
data from Massachusetts. Neither study made adjustments for a decrease in disease severity that might be associated with
increased incidence. Adult strategies might have appeared cost-effective or cost-saving at high incidence because the
distribution of the severity of disease was assumed to be the same regardless of incidence.
To address these discrepancies, the adult vaccination strategy was re-examined using the cost-effectiveness study
model (155,156). The updated analysis estimated the
cost-effectiveness of vaccinating adults aged 20--64 years with a single
Tdap booster and explored the impact of incidence and severity of disease on cost-effectiveness. Costs, health outcomes, and
cost-effectiveness were analyzed for a U.S. cohort of approximately 166 million adults aged 20--64 years over a 10-year
period. The revised analysis assumed an incremental vaccine cost of $20 on the basis of updated price estimates of Td and Tdap
in the private and public sectors, an incidence of adult pertussis ranging from 10--500 per 100,000 population, and
vaccine delivery estimates ranging from 57%--66% among adults on the basis of recently published estimates. Without an
adult vaccination program, the estimated number of adult pertussis cases over a 10-year period ranged from 146,000 at
an incidence of 10 per 100,000 population to 7.1 million at an incidence of 500 per 100,000 population. A one-time
adult vaccination program would prevent approximately 44% of cases over a 10-year period. The number of quality adjusted
life years (QALYs) saved by a vaccination program varied substantially depending on disease incidence. At a rate of 10
per 100,000 population, a vaccination program resulted in a net loss of QALYs because of the disutility associated with
vaccine adverse events. As disease incidence increased, the benefits of preventing pertussis far outweighed the risks associated
with vaccine adverse events. The number of QALYs saved by the one-time adult strategy was approximately 104,000
(incidence: 500 per 100,000 population).
The programmatic cost of a one-time adult vaccination strategy would be $2.1 billion. Overall, the net cost of the
one-time adult vaccination program ranged from $0.5 to $2 billion depending on disease incidence. The cost per case prevented
ranged from $31,000 per case prevented at an incidence of 10 per 100,000 population to $160 per case prevented at an incidence
of 500 per 100,000 (Table 12). The cost per QALY saved ranged from "dominated" (where "No vaccination" is preferred) at
10 per 100,000 population to $5,000 per QALY saved at 500 per 100,000 population. On the basis of a benchmark of
$50,000 per QALY saved (157--159), an adult vaccination program became cost-effective when the incidence exceeded
120 per 100,000 population. When adjustments were made for severity of illness at high disease incidence, little impact was
observed on the overall cost-effectiveness of a vaccination program.
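As a rough check, the cost-per-case-prevented figures above can be reproduced from the quoted inputs: the number of cases expected without a program, the assumption that a one-time program prevents approximately 44% of cases, and the net program cost. This is a back-of-envelope sketch using only values stated in the text; the published model incorporates additional detail, so results agree only approximately.

```python
# A one-time adult Tdap program was assumed to prevent ~44% of cases
# over the 10-year horizon (value quoted in the text).
PREVENTED_FRACTION = 0.44

def cost_per_case_prevented(cases_without_program, net_program_cost):
    """Net program cost (USD) divided by cases averted over 10 years."""
    cases_prevented = PREVENTED_FRACTION * cases_without_program
    return net_program_cost / cases_prevented

# Incidence 10 per 100,000: ~146,000 cases; net program cost ~$2 billion.
low_incidence = cost_per_case_prevented(146_000, 2.0e9)
# Incidence 500 per 100,000: ~7.1 million cases; net program cost ~$0.5 billion.
high_incidence = cost_per_case_prevented(7_100_000, 0.5e9)

print(round(low_incidence))   # ~$31,000 per case prevented
print(round(high_incidence))  # ~$160 per case prevented
```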
Similar results were obtained when program costs and benefits were analyzed over the lifetime of the adult cohort for
the one-time and decennial booster strategies.
Implementation of Adult Tdap Recommendations
Routine Adult Tdap Vaccination
The introduction of Tdap for routine use among adults offers an opportunity to improve adult vaccine coverage and
to offer protection against pertussis, tetanus, and diphtheria. Serologic and survey data indicate that U.S. adults
are undervaccinated against tetanus and diphtheria, and that rates of coverage decline with increasing age
(98,160). Maintaining seroprotection against tetanus and diphtheria through adherence to ACIP-recommended boosters is important for adults
of all ages. ACIP has recommended that adults receive a booster dose of tetanus
toxoid-containing vaccine every 10 years, or as indicated for wound care, to maintain protective
levels of tetanus antitoxin, and that adults with uncertain history of
primary vaccination receive a 3-dose primary series
(33). Every visit of an adult to a health-care provider should be regarded as
an opportunity to assess the patient's vaccination status and, if indicated, to provide protection against tetanus, diphtheria,
and pertussis. Nationwide survey data indicate that although only 68% of family physicians and internists who see adult patients
for outpatient primary care routinely administer Td for health maintenance when indicated, 81% would recommend Tdap for
their adult patients (161).
Vaccination of Adults in Contact with Infants
Vaccinating adults aged <65 years with Tdap who have or who anticipate having close contact with an infant could
decrease the morbidity and mortality of pertussis among infants by preventing pertussis in the adult and thereby
preventing transmission to the infant. Administration of Tdap to adult contacts at least 2 weeks before contact with an infant is
optimal. Near peak antibody responses to pertussis vaccine antigens can be achieved with booster doses by 7 days postvaccination,
as demonstrated in a study in Canadian children after receipt of a DTaP-IPV booster.
The strategy of vaccinating contacts of persons at high risk to reduce disease and therefore transmission is used
with influenza. Influenza vaccine is recommended for household contacts and out-of-home caregivers of children aged
0--59 months, particularly infants aged 0--6 months, the pediatric group at greatest risk for influenza-associated
complications (162). A similar strategy for Tdap is likely to be acceptable to physicians. In a 2005 national survey, 62% of
obstetricians surveyed reported that obstetricians and adult primary-care providers should administer Tdap to adults anticipating contact
with an infant, if recommended by ACIP and the American College of Obstetricians and Gynecologists (ACOG)
Protecting women with Tdap before pregnancy also could reduce the number of mothers who acquire and
transmit pertussis to their infant. ACOG states that preconceptional vaccination of women to prevent disease in the offspring,
when practical, is preferred to vaccination of pregnant women
(164). Because approximately half of all pregnancies in the
United States are unplanned, targeting women of child-bearing age before they become pregnant for a dose of Tdap might be
the most effective strategy (165). Vaccinating susceptible women of childbearing age with measles, mumps, and rubella
vaccine also is recommended to protect the mother and to prevent transmission to the fetus or young infant
(166). Implementing preconception vaccination in general medical offices, gynecology outpatient care centers, and
family-planning clinics is essential to ensure the success of this preventive strategy.
If Tdap vaccine is not administered before pregnancy, immediate postpartum vaccination of new mothers is an
alternative. Rubella vaccination has been successfully administered postpartum. In studies in New Hampshire and other
sites, approximately 65% of rubella-susceptible women who gave birth received MMR postpartum
(167,168). In a nationwide survey, 78% of obstetricians reported that they would recommend Tdap for women during the postpartum hospital stay if
it were recommended (163). Vaccination before discharge from the hospital or birthing center, rather than at a follow-up
visit, has the advantage of decreasing the time when new mothers could acquire and transmit pertussis to their newborn.
Other household members, including fathers, should receive Tdap before the birth of the infant as recommended.
Mathematical modeling can provide useful information about the potential effectiveness of a vaccination strategy
targeting contacts of infants. One model evaluating different vaccine strategies in the United States suggested that
vaccinating household contacts of newborns, in addition to routine adolescent Tdap vaccination, could prevent 76% of cases in
infants aged <3 months (169). A second model from Australia estimated a 38% reduction in cases and deaths among infants
aged <12 months if both parents of the infant were vaccinated before the infant was discharged from the hospital
Vaccination of Pregnant Women
ACIP has recommended Td routinely for pregnant women who received the last tetanus toxoid-containing vaccine
>10 years earlier to prevent maternal and neonatal tetanus
(33,171). Among women vaccinated against tetanus, passive transfer
of antitetanus antibodies across the placenta during pregnancy protects their newborns from neonatal tetanus.
As with tetanus, antibodies to pertussis antigens are passively transferred during pregnancy
(174,175); however, serologic correlates of protection against pertussis are not known
(113). Whether passive transfer of maternal antibodies to
pertussis antigens protects neonates against pertussis is not clear
(113,176); whether increased titers of passive antibody to
pertussis vaccine antigens substantially interfere with response to DTaP during infancy remains an important question
(177--179). All licensed Td and Tdap vaccines are categorized as Pregnancy Category
C agents by FDA. Pregnant women were
excluded from prelicensure trials, and animal reproduction studies have not been conducted for Td or Tdap
(111,180--183). Td and TT have been used extensively in pregnant women, and no evidence indicates that tetanus and
diphtheria toxoids administered during pregnancy are teratogenic.
Pertussis Among Health-Care Personnel
This section has been reviewed by and is supported by the Healthcare Infection Control Practices Advisory Committee (HICPAC).
Nosocomial spread of pertussis has been documented in various health-care settings, including hospitals and
emergency departments serving pediatric and adult patients
(186--189), out-patient clinics (CDC, unpublished data, 2005), nursing
homes (89), and long-term--care facilities
(190--193). The source case of pertussis has been reported as a patient
(188,194--196), HCP with hospital- or community-acquired pertussis
(192,197,198), or a visitor or family member
Symptoms of early pertussis (catarrhal phase) are indistinguishable from other respiratory infections and conditions.
When pertussis is not considered early in the differential diagnosis of patients with compatible symptoms, HCP and patients
are exposed to pertussis, and inconsistent use of face or nose and mouth protection during evaluation and delay in
isolating patients can occur
(187,188,197,200,202). One study described the diagnosis of pertussis being considered in an
HCP experiencing paroxysmal cough, posttussive emesis, and spontaneous pneumothorax, but only after an infant patient
was diagnosed with pertussis 1 month later and after three other HCP had been infected
(198). Pertussis among HCP and patients can result in substantial morbidity
(187,188,197,200,202). Infants who have nosocomial pertussis are at
substantial risk for severe and, rarely, fatal disease
Risk for Pertussis Among HCP
HCP are at risk for being exposed to pertussis in inpatient and outpatient pediatric facilities
(186--188,194--200,203,204) and in adult health-care facilities and settings including emergency departments
(196,202,205--207). In a survey of
infection-control practitioners from pediatric hospitals, 90% reported HCP exposures to pertussis over a 5-year period; at 11% of
the reporting institutions, a physician contracted the disease
(208). A retrospective study conducted in a Massachusetts
tertiary-care center with medical, surgical, pediatric, and obstetrical services during October
2003--September 2004 documented pertussis in 20 patients and three HCP, and pertussis exposure in approximately 300 HCP
(209). One infected HCP exposed 191 other persons, including co-workers and patients in a postanesthesia care unit. Despite aggressive investigation
and prophylaxis, a patient and the HCP's spouse were infected
In a California university hospital with pediatric services, 25 patients exposed 27 HCP over a 5-year period
(210). At a North Carolina teaching hospital during 2002--2005, a total of 21 pertussis patients exposed 72 unprotected HCP
(DJ Weber, Hospital Epidemiology and Occupational Health, University of North Carolina Health Care System,
personal communication, 2006). A Philadelphia children's hospital that tracked exposures during September 2003--April
2005 identified seven patients who exposed 355 unprotected HCP
(211). The exposed HCP included 163 nurses, 106
physicians, 42 radiology technicians, 29 respiratory therapists, and 15 others. Recent estimates suggest that up to nine HCP are
exposed on average for each case of pertussis with delayed diagnosis.
Serologic studies among hospital staff suggest
B. pertussis infection among HCP is more frequent than suggested by
the attack rates of clinical disease
(212,213). In one study, annual rates of infection among a group of clerical HCP with
minimal patient contact ranged from 4%--43% depending on the serologic marker used (4%--16% based on anti-PT IgG)
(208). The seroprevalence of pertussis agglutinating antibodies among HCP in one hospital outbreak correlated with
the degree of patient contact. Pediatric house staff and ward nurses were 2--3 times more likely to have
B. pertussis agglutinating antibodies than nurses with administrative responsibilities, 82% and 71% versus 35%, respectively
(197). In another study, the annual incidence
of B. pertussis infection among emergency department staff was approximately three times higher
than among resident physicians (3.6% versus 1.3%, respectively), on the basis of elevated anti-PT IgG titers. Two of five
HCP (40%) with elevated anti-PT IgG titers had clinical signs of pertussis.
The risk for pertussis among HCP relative to the general population was estimated in a Quebec study of adult
and adolescent pertussis. Among the 384 (58%) of 664 eligible cases among adults aged
>18 years (41), HCP accounted for
32 (8%) of the pertussis cases and 5% of the population. Pertussis among HCP was 1.7 times higher than among the
general population. Similar studies have not been conducted in the United States.
Pertussis outbreaks are reported from chronic-care or nursing home facilities and in residential-care institutions; these HCP
might be at increased risk for pertussis. However, the risk for pertussis among HCP in these settings compared with the general
population has not been evaluated (190--193).
Management of Exposed Persons in Settings with Nosocomial Pertussis
Investigation and control measures to prevent pertussis after unprotected exposure in health-care settings are
labor intensive, disruptive, and costly, particularly when the number of exposed contacts is large
(203). Such measures include identifying contacts among HCP and patients, providing postexposure prophylaxis for asymptomatic close contacts,
and evaluating, treating, and placing symptomatic HCP on administrative leave until they have received effective
treatment. Despite the effectiveness of control measures to prevent further transmission of pertussis, one or more cycles
of transmission with exposures and secondary cases can occur before pertussis is recognized. This might occur regardless of whether
the source case is a patient or HCP, the age of the source case, or the setting (e.g., emergency department,
postoperative suite or surgical ward [209,214], nursery [198,215], in-patient ward [187,194,216], or maternity
ambulatory care). The number of reported outbreak-related secondary cases ranges
from none to approximately 80 per index case and
includes other HCP (205), adults
(209), and pediatric patients (203). Secondary cases among infants have resulted in prolonged
hospital stay, mechanical ventilation (198), or death.
The cost of controlling nosocomial pertussis is high, regardless of the size of the outbreak. The impact of pertussis
on productivity can be substantial, even when no secondary case of pertussis occurs. The hospital costs result from
infection prevention and control/occupational health employee time to identify and notify exposed patients and personnel, to
educate personnel in involved areas, and to communicate with HCP and the public; from providing prophylactic antimicrobial
agents for exposed personnel; laboratory testing and treating symptomatic contacts; placing symptomatic personnel
on administrative leave; and lost time from work for ill HCP.
Cost-Benefit of Vaccinating Health-Care Personnel with Tdap
By vaccinating HCP with Tdap and reducing the number of cases of pertussis among HCP, hospitals will reduce the
costs associated with resource-intensive hospital investigations and control measures (e.g., case/contact tracking,
postexposure prophylaxis, and treatment of hospital acquired pertussis cases). These costs can be substantial. In four recent
hospital-based pertussis outbreaks, the cost of controlling pertussis ranged from $74,870--$174,327 per outbreak
(203,207). In a Massachusetts hospital providing pediatric, adult, and obstetrical care, a prospective study found that the cost of
managing pertussis exposures over a 12-month period was $84,000--$98,000
(209). Similarly, in a Philadelphia pediatric hospital,
the estimated cost of managing unprotected exposures over a 20-month period was $42,900
(211). Vaccinating HCP could be cost-beneficial
for health-care facilities if vaccination reduces nosocomial infections and outbreaks, decreases
transmission, and prevents secondary cases. These cost savings would be realized even with no change in the guidelines for investigation
and control measures.
A model to estimate the cost of vaccinating HCP and the net return from preventing nosocomial pertussis was
constructed using probabilistic methods and a hypothetical cohort of 1,000 HCP followed for 10 years. Data from the literature
were used to determine baseline assumptions. The annual rate of pertussis infection among HCP was approximately 7% on
the basis of reported serosurveys (212,213); of these, 40% were assumed to be symptomatic
(213). The ratio of identified exposures per HCP case was estimated to be nine
(187,199,202,206), and the cost of infection-control measures per
exposed person was estimated to be $231
(187,203,209). Employment turnover rates were estimated to be 17%,
vaccine effectiveness was 71% over 10 years
(28,155), vaccine coverage was 66%
(160), the rate of anaphylaxis following vaccination was 0.0001%
(42,219,220), and the cost of vaccine was $30 per dose
(155,221). For each year, the number of nosocomial pertussis exposures requiring investigation and control interventions was calculated for two scenarios: with
or without a vaccination program for HCP having direct patient contact.
In the absence of vaccination, approximately 203 (range: 34--661) nosocomial exposures would occur per 1,000
HCP annually. The vaccination program would prevent 93 (range: 13--310) annual nosocomial pertussis exposures per 1,000
HCP per year. Over a 10-year period, the cost of infection control without vaccination would be $388,000; with a
Tdap vaccination program, the cost of infection control would be $213,000. The Tdap vaccination program for a stable
population of 1,000 HCP over the same period would cost $69,000. Introduction of a vaccination program would result in
an estimated median net savings of $95,000 and a benefit-cost ratio of 2.38 (range: 0.4--10.9) (i.e., for every dollar spent on
the vaccination program, the hospital would save $2.38 on control measures).
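The benefit-cost arithmetic for the hospital model can be sketched from the point estimates quoted above (10-year infection-control costs with and without vaccination, and the program cost). Note that the published net savings ($95,000) and benefit-cost ratio (2.38) are medians from a probabilistic simulation, so this deterministic back-of-envelope calculation lands near, but not exactly on, those values:

```python
def benefit_cost(control_cost_no_vax, control_cost_with_vax, program_cost):
    """Savings on infection control, net of program cost, and savings
    per dollar spent on vaccination (all values in USD over 10 years)."""
    savings = control_cost_no_vax - control_cost_with_vax
    return {
        "net_savings": savings - program_cost,
        "benefit_cost_ratio": savings / program_cost,
    }

# Point estimates quoted in the text for 1,000 HCP over 10 years.
result = benefit_cost(388_000, 213_000, 69_000)
# Deterministic estimates: net savings ≈ $106,000, ratio ≈ 2.5 --
# in the same range as the published medians of $95,000 and 2.38.
```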
Implementing a Hospital Tdap Program
Infrastructure for screening, administering, and tracking vaccinations exists at occupational health or infection
prevention and control departments in most hospitals and is expected to provide the infrastructure to implement Tdap
vaccination programs. New personnel can be screened and vaccinated with Tdap when they begin employment. As Tdap
vaccination coverage in the general population increases, many new HCP will have already received a dose of Tdap.
To achieve optimal Tdap coverage among personnel in health-care settings, health-care facilities are encouraged to
use strategies that have enhanced HCP participation in other hospital vaccination campaigns. Successful strategies for
hospital influenza vaccine campaigns have included strong proactive educational programs designed at appropriate educational
and language levels for the targeted HCP, vaccination clinics in areas convenient to HCP, vaccination at worksites, and
provision of vaccine at no cost to the HCP
(222--224). Some health-care institutions might favor a tiered approach to
Tdap vaccination, with priority given to HCP who have contact with infants aged <12 months and other vulnerable groups
Purchase and administration of Tdap for HCP is an added financial and operational burden for health-care facilities.
A cost-benefit model suggests that the cost of a Tdap vaccination program for HCP is offset by reductions in investigation
and control measures for pertussis exposures from HCP, in addition to the anticipated enhancement of HCP and patient safety.
Pertussis Exposures Among HCP Previously Vaccinated with Tdap
Health-care facilities could realize substantial
cost-saving if exposed HCP who are already vaccinated against pertussis
with Tdap were exempt from control interventions
(225). The guidelines for control of pertussis in health-care settings
were developed before pertussis vaccine (Tdap) was available for adults
(68,226). Studies are needed to evaluate the effectiveness
of Tdap to prevent pertussis in vaccinated HCP, the duration of protection, and the effectiveness of Tdap in preventing
infected vaccinated HCP from transmitting B.
pertussis to patients and other HCP. Until studies define the optimal management
of exposed vaccinated HCP or a consensus of experts is developed, health-care facilities should continue
postexposure prophylaxis for vaccinated HCP who have unprotected exposure to pertussis.
Alternatively, each health-care facility can determine an appropriate strategy for managing exposed vaccinated HCP on
the basis of available human and fiscal resources and whether the patient population served is at risk for severe pertussis
if transmission were to occur from an unrecognized case in a vaccinated HCP. Some health-care facilities might
have infrastructure to provide daily monitoring of exposed vaccinated HCP for early symptoms of pertussis and for
instituting prompt assessment, treatment, and administrative leave if early signs or symptoms of pertussis develop. Daily monitoring
of HCP for 21--28 days before beginning each work shift has been successful for vaccinated workers exposed to varicella
(227,228) and for monitoring the site of vaccinia (smallpox vaccine) inoculation
(229,230). Daily monitoring of pertussis-exposed
HCP who received Tdap might be a reasonable strategy for postexposure management, because the incubation period
of pertussis is up to 21 days and the risk for transmission before the onset of signs and symptoms of pertussis is minimal. In considering
this approach, hospitals should maximize efforts to prevent transmission of
B. pertussis to infants or other groups of
vulnerable persons. Additional study is needed to determine the effectiveness of this control strategy.
The following recommendations for the use of Tdap
(ADACEL®) are intended for adults aged 19--64 years who have
not already received a dose of Tdap. Tdap is licensed for a single use only; prelicensure studies on the safety or efficacy
of subsequent doses were not conducted. After receipt of a single dose of Tdap, subsequent doses of tetanus- and
diphtheria toxoid-containing vaccines should follow guidance from previously published recommendations for the use of Td and
TT (33). Adults should receive a decennial booster with Td beginning 10 years after receipt of Tdap
(33). Recommendations for the use of Tdap
(BOOSTRIX®) among adolescents are described elsewhere
(12). BOOSTRIX® is not licensed for use in adults.
1. Routine Tdap Vaccination
1-A. Recommendations for Use
1) Routine use: Adults aged 19--64 years should receive a single dose of Tdap to replace a single dose of Td
for active booster vaccination against tetanus, diphtheria, and pertussis if they received their last dose of Td
>10 years earlier. Replacing 1 dose of Td with Tdap will reduce the morbidity associated with pertussis in adults
and might reduce the risk for transmitting pertussis to persons at increased risk for pertussis and its complications.
2) Short interval between Td and Tdap: Tdap may be administered at an interval <10 years since the last Td to protect
against pertussis. Particularly in settings with increased risk for pertussis or its complications, the benefit of using
a single dose of Tdap at an interval <10 years to protect against pertussis generally outweighs the risk for local
and systemic reactions after vaccination. The safety of an interval as short as approximately 2 years between Td
and Tdap is supported by a Canadian study; shorter intervals may be used (see Safety Considerations for
Adult Vaccination with Tdap).
For adults who require tetanus toxoid-containing vaccine as part of wound management, a single dose of
Tdap is preferred to Td if they have not previously received Tdap (see Tetanus Prophylaxis in Wound Management).
3) Prevention of pertussis among infants aged <12 months by vaccinating their adult contacts: Adults who have
or who anticipate having close contact with an infant aged <12 months (e.g., parents, grandparents aged <65
years, child-care providers, and HCP) should receive a single dose of Tdap at intervals <10 years since the last Td
to protect against pertussis if they have not previously received Tdap. Ideally, these adults should receive Tdap
at least 2 weeks before beginning close contact with the infant. An interval as short as 2 years from the last dose
of Td is suggested to reduce the risk for local and systemic reactions after vaccination; shorter intervals may be used.
Infants aged <12 months are at highest risk for pertussis-related complications and hospitalizations
compared with older age groups. Young infants have the highest risk for death. Vaccinating adult contacts might
reduce the risk for transmitting pertussis to these infants (see Infant Pertussis and Transmission to Infants).
Infants should be vaccinated on time with pediatric DTaP.
When possible, women should receive Tdap before becoming pregnant. Approximately half of all pregnancies
in the United States are unplanned (165). Any woman of childbearing age who might become pregnant
is encouraged to receive a single dose of Tdap if she has not previously received Tdap (see Vaccination During Pregnancy).
Women, including those who are breastfeeding, should receive a dose of Tdap in the immediate
postpartum period if they have not previously received Tdap. The postpartum Tdap should be administered
before discharge from the hospital or birthing center. If Tdap cannot be administered before discharge, it should
be administered as soon as feasible.
4) Health-care personnel§§: HCP in hospitals or ambulatory care
settings¶¶ who have direct patient
contact should receive a single dose of Tdap as soon as feasible if they have not previously received Tdap. Although
Td booster doses are routinely recommended at an interval of 10 years, an interval as short as 2 years from the
last dose of Td is recommended for the Tdap dose among these HCP. These HCP include but are not limited to
physicians, other primary-care providers, nurses, aides, respiratory therapists, radiology technicians,
students (e.g., medical, nursing, and other), dentists, social workers, chaplains, volunteers, and dietary and clerical workers.
Other HCP (i.e., not in hospitals or ambulatory care settings or without direct patient contact) should receive
a single dose of Tdap to replace the next scheduled Td according to the routine recommendation at an interval
no greater than 10 years since the last Td. They are encouraged to receive the Tdap dose at an interval as short as
2 years following the last Td.
Vaccinating HCP with Tdap will protect them against pertussis and is expected to reduce transmission
to patients, other HCP, household members, and persons in the community. Priority should be given
to vaccination of HCP who have direct contact with infants aged <12 months (see Prevention of Pertussis
Among Infants Aged <12 Months by Vaccinating their Adult Contacts).
Hospitals and ambulatory-care facilities should provide Tdap for HCP and use approaches that
maximize vaccination rates (e.g., education about the benefits of vaccination, convenient access, and the provision of
Tdap at no charge) (see Implementing a Hospital Tdap Program).
Tdap is not licensed for multiple administrations. After receipt of Tdap, HCP should receive Td or TT
for booster immunization against tetanus and diphtheria according to previously published guidelines (33).
Routine adult Tdap vaccination recommendations are supported by evidence from randomized controlled
clinical trials, a nonrandomized open-label trial, observational studies, and expert opinion.
1-B. Dosage and Administration
The dose of Tdap is 0.5 mL, administered intramuscularly (IM), preferably into the deltoid muscle.
1-C. Simultaneous Vaccination with Tdap and Other Vaccines
If two or more vaccines are indicated, they should be administered during the same visit (i.e.,
simultaneous vaccination). Each vaccine should be administered using a separate syringe at a different anatomic site.
Certain experts recommend administering no more than two injections per muscle, separated by at least 1
inch. Administering all indicated vaccines during a single visit increases the likelihood that adults will
receive recommended vaccinations (138).
1-D. Preventing Adverse Events
The potential for administration errors involving tetanus toxoid-containing vaccines and other vaccines is
well documented (232--234). Pediatric DTaP vaccine formulations should not be administered to adults. Attention
to proper vaccination technique, including use of an appropriate needle length and standard routes of
administration (i.e., IM for Tdap) might minimize the risk for adverse events.
1-E. Record Keeping
Health-care providers who administer vaccines are required to keep permanent vaccination records of
vaccines covered under the National Childhood Vaccine Injury Act; ACIP has recommended that
this practice include all vaccines (138). Encouraging adults to maintain a personal vaccination record is important
to minimize administration of unnecessary vaccinations. Vaccine providers can record the type of the
vaccine, manufacturer, anatomic site, route, and date of administration and name of the administering facility on
the personal record.
2. Contraindications and Precautions for Use of Tdap
2-A. Contraindications
Tdap is contraindicated for persons with a history of serious allergic reaction (i.e., anaphylaxis) to any
component of the vaccine. Because of the importance of tetanus vaccination, persons with a history of anaphylaxis
to components included in any Tdap or Td vaccines should be referred to an allergist to determine whether they
have a specific allergy to tetanus toxoid and can safely receive tetanus toxoid (TT) vaccinations.
Tdap is contraindicated for adults with a history of encephalopathy (e.g., coma or prolonged seizures)
not attributable to an identifiable cause within 7 days of administration of a vaccine with pertussis components.
This contraindication is for the pertussis components, and these persons should receive Td instead of Tdap.
2-B. Precautions and Reasons to Defer Tdap
A precaution is a condition in a vaccine recipient that might increase the risk for a serious adverse reaction
(138). The following are precautions for Tdap administration. In these situations, vaccine providers should evaluate
the risks for and benefits of administering Tdap:
--- History of Guillain-Barré syndrome <6 weeks after a previous dose of a tetanus toxoid-containing vaccine. If a decision is
made to continue vaccination with tetanus toxoid, Tdap is preferred to Td if otherwise indicated.
Tdap vaccination should generally be deferred during the following situations:
--- Moderate or severe acute illness with or without fever. Defer Tdap vaccination until the acute illness resolves.
--- Unstable neurologic condition (e.g., cerebrovascular events and acute encephalopathic conditions) (see
Safety Considerations for Adult Vaccination with Tdap for a discussion of neurologic conditions).
--- History of an Arthus reaction following a previous dose of a tetanus toxoid-containing and/or
diphtheria toxoid-containing vaccine, including MCV4 (see Safety Considerations for Adult Vaccination with Tdap
for description of Arthus reaction). Vaccine providers should review the patient's medical history to verify
the diagnosis of Arthus reaction and can consult with an allergist or immunologist. If an Arthus reaction was
likely, vaccine providers should consider deferring Tdap vaccination until at least 10 years have elapsed since the
last tetanus toxoid-containing and/or diphtheria toxoid-containing vaccine was received. If the Arthus reaction
was associated with a vaccine that contained diphtheria toxoid without tetanus toxoid (e.g., MCV4), deferring
Tdap or Td might leave the adult inadequately protected against tetanus. In this situation, if the last tetanus
toxoid-containing vaccine was administered
>10 years earlier, vaccine providers can obtain a serum tetanus
antitoxin level to evaluate the need for tetanus vaccination (tetanus antitoxin levels
>0.1 IU/mL are considered protective) or administer TT.
2-C. Not Contraindications or Precautions for Tdap
The following conditions are not contraindications or precautions for Tdap, and adults with these conditions
may receive a dose of Tdap if otherwise indicated. The conditions in italics are precautions for pediatric DTP/DTaP
but are not contraindications or precautions for Tdap vaccination in adults:
Temperature >105°F (>40.5°C) within 48 hours after pediatric DTP/DTaP not attributable to another cause;
Collapse or shock-like state (hypotonic hyporesponsive episode) within 48 hours after pediatric DTP/DTaP;
Persistent crying lasting >3 hours, occurring within 48 hours after pediatric DTP/DTaP;
Convulsions with or without fever, occurring within 3 days after pediatric DTP/DTaP;
Stable neurologic disorder, including well-controlled seizures, a history of seizure disorder that has resolved,
and cerebral palsy (See section, Safety Considerations for Adult Vaccination with Tdap);
Immunosuppression, including persons with human immunodeficiency virus (HIV). The immunogenicity of
Tdap in persons with immunosuppression has not been studied and could be suboptimal;
Intercurrent minor illness;
Use of antimicrobials;
History of an extensive limb swelling (ELS) reaction following pediatric DTP/DTaP or Td that was not an
Arthus hypersensitivity reaction (see Safety Considerations for Adult Vaccination with Tdap for descriptions of
ELS and Arthus reactions).
3. Special Situations for Tdap Use
3-A. Pertussis Outbreaks and Other Settings with Increased Risk for Pertussis or its Complications
During periods of increased community pertussis activity or during pertussis outbreaks, vaccine providers
might consider administering Tdap to adults at an interval <10 years since the last Td or TT if Tdap was not
previously received (see Spacing and Sequencing of Vaccines Containing Tetanus Toxoid, Diphtheria Toxoid, and
Pertussis Antigens). Postexposure chemoprophylaxis and other pertussis control guidelines, including guidelines for HCP,
are described elsewhere (see Management of Exposed Persons in Settings with Nosocomial Pertussis)
(168,226,235). The benefit of using a short interval also might be increased for adults with co-morbid medical conditions
(see Clinical Features and Morbidity Among Adults with Pertussis).
3-B. History of Pertussis
Adults who have a history of pertussis generally should receive Tdap according to the routine
recommendation. This practice is preferred because the duration of protection induced by pertussis is unknown (waning
might begin as early as 7 years after infection) and because the diagnosis of pertussis can be difficult to confirm,
particularly with tests other than culture for B.
pertussis. Administering pertussis vaccine to persons with a history of
pertussis presents no theoretical safety concern.
3-C. Tetanus Prophylaxis in Wound Management
ACIP has recommended administering tetanus toxoid-containing vaccine and tetanus immune globulin (TIG)
as part of standard wound management to prevent tetanus (Table 14)
(33). Tdap is preferred to Td for adults vaccinated
>5 years earlier who require a tetanus toxoid-containing vaccine as part of wound management and
who have not previously received Tdap. For adults previously vaccinated with Tdap, Td should be used if a
tetanus toxoid-containing vaccine is indicated for wound care. Adults who have completed the 3-dose primary
tetanus vaccination series and have received a tetanus toxoid-containing vaccine <5 years earlier are protected
against tetanus and do not require a tetanus toxoid-containing vaccine as part of wound management.
An attempt must be made to determine whether a patient has completed the 3-dose primary tetanus
vaccination series. Persons with unknown or uncertain previous tetanus vaccination histories should be considered to have
had no previous tetanus toxoid-containing vaccine. Persons who have not completed the primary series might
require tetanus toxoid and passive vaccination with TIG at the time of wound management (Table 14). When both
TIG and a tetanus toxoid-containing vaccine are indicated, each product should be administered using a separate
syringe at different anatomic sites.
Adults with a history of Arthus reaction following a previous dose of a tetanus toxoid-containing vaccine should
not receive a tetanus toxoid-containing vaccine until >10 years after the most recent dose, even if they have a
wound that is neither clean nor minor. If the Arthus reaction was associated with a vaccine that contained
diphtheria toxoid without tetanus toxoid (e.g., MCV4), deferring Tdap or Td might leave the adult inadequately
protected against tetanus, and TT should be administered (see precautions for management options). In all circumstances,
the decision to administer TIG is based on the primary vaccination history for tetanus.
3-D. Adults with History of Incomplete or Unknown Tetanus, Diphtheria, or Pertussis Vaccination
Adults who have never been vaccinated against tetanus, diphtheria, or pertussis (no dose of pediatric
DTP/DTaP/DT or Td) should receive a series of three vaccinations containing tetanus and diphtheria toxoids. The
preferred schedule is a single dose of Tdap, followed by a dose of Td >4 weeks after Tdap and another dose of Td
6--12 months later (171). However, Tdap can substitute for any one of the doses of Td in the 3-dose primary
series. Alternatively, in situations in which the adult probably received vaccination against tetanus and diphtheria
but cannot produce a record, vaccine providers may consider serologic testing for antibodies to tetanus and
diphtheria toxin to avoid unnecessary vaccination. If tetanus and diphtheria antitoxin levels are each
>0.1 IU/mL, previous vaccination with tetanus and diphtheria toxoid vaccine is presumed, and a single dose of Tdap is indicated.
Adults who received other incomplete vaccination series against tetanus and diphtheria should be vaccinated
with Tdap and/or Td to complete a 3-dose primary series of tetanus and diphtheria toxoid-containing vaccines. A
single dose of Tdap can be used in the series.
3-E. Nonsimultaneous Vaccination with Tdap and Other Vaccines, Including MCV4
Inactivated vaccines may be administered at any time before or after a different inactivated or live vaccine, unless
a contraindication exists (138). Simultaneous administration of Tdap (or Td) and MCV4 (which all
contain diphtheria toxoid) during the same visit is preferred when both Tdap (or Td) and MCV4 vaccines are
indicated (12). If simultaneous vaccination is not feasible (e.g., a vaccine is not available), MCV4 and Tdap (or Td) can
be administered using any sequence. It is possible that persons who recently received one diphtheria
toxoid-containing vaccine might have increased rates for adverse reactions after a subsequent diphtheria-containing vaccine
when diphtheria toxoid antibody titers remain elevated from the previous vaccination (see Safety Considerations for
Adult Vaccination with Tdap).
3-F. Inadvertent Administration of Tdap (BOOSTRIX®) or Pediatric DTaP
Of the two licensed Tdap products, only ADACEL® is licensed and recommended for use in adults.
BOOSTRIX® is licensed for persons aged 10--18 years and should not be administered to persons aged
>19 years. Pediatric DTaP is not indicated for persons aged >7 years. To help prevent inadvertent administration of
BOOSTRIX® or pediatric DTaP when
ADACEL® is indicated, vaccine providers should review product labels before administering
these vaccines; the packaging might appear similar. If
BOOSTRIX® or pediatric DTaP is administered to an adult
aged >19 years, this dose should count as the Tdap dose and the patient should not receive an additional dose of
Tdap (ADACEL®). The patient should be informed of any inadvertent vaccine administration.
Both Tdap products are licensed for active booster immunization as a single dose; neither is licensed for
multiple administrations. After receipt of Tdap, persons should receive Td for booster immunization against tetanus
and diphtheria, according to previously published guidelines
(33). If a dose of Tdap is administered to a person who
has previously received Tdap, this dose should count as the next dose of tetanus toxoid-containing vaccine.
3-G. Vaccination during Pregnancy
Recommendations for pregnant women will be published separately
(236). As with other inactivated vaccines
and toxoids, pregnancy is not considered a contraindication for Tdap vaccination
(138). Pregnant women who received the last tetanus toxoid-containing vaccine during the preceding 10 years and who have not previously received
Tdap generally should receive Tdap after delivery. In situations in which booster protection against tetanus and
diphtheria is indicated in pregnant women, the ACIP generally recommends Td. Providers should refer to
recommendations for pregnant women for further information.
Because of lack of data on the use of Tdap in pregnant women, sanofi pasteur has established a pregnancy
registry. Health-care providers are encouraged to report Tdap
(ADACEL®) vaccination during pregnancy, regardless
of trimester, to sanofi pasteur (telephone: 800-822-2463).
3-H. Adults Aged >65 Years
Tdap is not licensed for use among adults aged >65 years. The safety and immunogenicity of Tdap among
adults aged >65 years were not studied during U.S. pre-licensure trials. Adults aged
>65 years should receive a dose of Td every 10 years for protection against tetanus and diphtheria and as indicated for wound management.
Research on the immunogenicity and safety of Tdap among adults aged
>65 years is needed. Recommendations for use of Tdap in adults aged
>65 years will be updated as new data become available.
Reporting of Adverse Events After Vaccination
As with any newly licensed vaccine, surveillance for rare adverse events associated with administration of Tdap is
important for assessing its safety in large-scale use. The National Childhood Vaccine Injury Act of 1986 requires health-care providers
to report specific adverse events that follow tetanus, diphtheria, or pertussis vaccination
(http://vaers.hhs.gov/reportable.htm). All clinically significant adverse events should be reported to VAERS, even if causal relation to vaccination is not
apparent. VAERS reporting forms and information are available electronically at
http://www.vaers.org or by telephone
(800-822-7967). Web-based reporting is available and providers are encouraged to report electronically at
https://secure.vaers.org/VaersDataEntryintro.htm to promote better timeliness and quality of safety data.
Vaccine Injury Compensation
VICP, established by the National Childhood Vaccine Injury Act of 1986, is a system under which compensation can
be paid on behalf of a person thought to have been injured or to have died as a result of receiving a vaccine covered by
the program. The program is intended as an alternative to civil litigation under the traditional tort system because
negligence need not be proven.
The Act establishes 1) a Vaccine Injury Compensation Table that lists the vaccines covered by the program; 2) the
injuries, disabilities, and conditions (including death) for which compensation can be paid without proof of causation; and 3)
the period after vaccination during which the first symptom or substantial aggravation of the injury must appear. Persons can
be compensated for an injury listed in the established table or one that can be demonstrated to result from administration of
a listed vaccine. All tetanus toxoid-containing vaccines and vaccines with pertussis components (e.g., Tdap) are covered
under the act. Additional information about the program is available at
http://www.hrsa.gov/osp/vicp or by telephone
Areas of Future Research Related to Tdap and Adults
With recent licensure and introduction of Tdap for adults, close monitoring of pertussis trends and vaccine safety will
be priorities for public health organizations and health-care providers. Active surveillance sites in Massachusetts and
Minnesota, supported by CDC, are being established to provide additional data on the burden of pertussis among adults and the
impact of adult Tdap vaccination policy. Postlicensure studies and surveillance activities are planned or underway to evaluate
changes in the incidence of pertussis, the uptake of
Tdap, and the duration and effectiveness of Tdap vaccine. Further research is
needed to establish the safety and immunogenicity of Tdap among adults aged >65 years and among pregnant women and their
infants; to evaluate the effectiveness of deferring prophylaxis among recently vaccinated health-care personnel exposed to pertussis;
to assess the safety, effectiveness and duration of protection of repeated Tdap doses; to develop improved diagnostic tests
for pertussis; and to evaluate and define immunologic correlates of protection for pertussis.
This report was prepared in collaboration with the Advisory Committee on Immunization Practices Pertussis Working Group.
We acknowledge our U.S. Food and Drug Administration colleagues, Theresa Finn, PhD, and Ann T. Schwartz, MD, for their review of
the Tdap product information, and our Massachusetts Department of Public Health colleagues, Susan M. Lett, MD and Arquimedes
Areche, MPH, for use of unpublished data. We also acknowledge the contributions of the following consultants who provided technical
expertise used in this report: William Atkinson, MD, Michael Decker, MD, Steve Gordon, MD, Scott Halperin, MD, Kashif Iqbal, MPH,
David Johnson, MD, Preeta Kutty, MD, Leonard Mermel, DO, Michele Pearson, MD, Mark Russi, MD, Pamela Srivastava, MS,
Larry Pickering, MD, Nancy Rosenstein Messonnier, MD, Benjamin Schwartz, MD, Sue Sebazco, David Weber, MD, Sandra Fitzler,
and Janice Zalen, MPA.
References
- CDC. Pertussis vaccination: Use of acellular pertussis vaccines among infants and young children. Recommendations of the Advisory
Committee on Immunization Practices (ACIP). MMWR 1997;46(RR-7).
- Jenkinson D. Duration of effectiveness of pertussis vaccine: evidence from a 10 year community study. BMJ 1988;296:612--4.
- Lambert H. Epidemiology of a small pertussis outbreak in Kent County, Michigan. Public Health Rep 1965;80:365--9.
- Liese JG, Stojanov S, Peters A, et al. Duration of efficacy after primary immunization with Biken acellular pertussis vaccine [Abstract G-2050].
In: Programs and Abstracts of the 43rd Interscience Conference on Antimicrobial Agents and Chemotherapy, Chicago, IL, September 14--17, 2003.
- Olin P, Gustafsson L, Barreto L, et al. Declining pertussis incidence in Sweden following the introduction of acellular pertussis vaccine.
- Salmaso S, Mastrantonio P, Tozzi AE, et al. Sustained efficacy during the first 6 years of life of 3-component acellular pertussis
vaccines administered in infancy: The Italian experience. Pediatrics 2001;108:81.
- Wendelboe AM, Van Rie A, Salmaso S, Englund JA. Duration of immunity against pertussis after natural infection or vaccination. Pediatr
Infect Dis J 2005;24:S58--S61.
- CDC. Final 2005 reports of notifiable diseases. MMWR 2006;55:880--1.
- CDC. Summary of notifiable diseases---United States, 2004. MMWR 2006;53:1--79.
- CDC. Summary of notifiable diseases---United States, 2003. MMWR 2005;52(No. 54).
- Food and Drug Administration. Product approval
information---licensing action, package insert: Tetanus Toxoid, Reduced Diphtheria Toxoid
and Acellular Pertussis Vaccine Adsorbed ADACEL. sanofi pasteur. Rockville, MD: US Department of Health and Human Services, Food and
Drug Administration, Center for Biologics Evaluation and Research; 2005. Available at
- CDC. Preventing tetanus, diphtheria, and pertussis among adolescents: use of tetanus toxoid, reduced diphtheria toxoid and acellular
pertussis vaccines: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 2006;55(No. RR-3).
- CDC. Annual summary 1979: reported morbidity and mortality in the United States. MMWR 1980;28:12--7.
- US Department of Health Education and Welfare. Vital statistics rates in the United States, 1900--1940 and 1940--1960; Vital Statistics Rates
in the United States; Washington DC: Public Health Service, National Center for Health Statistics, 1968. Public Health Service publication
- Lapin LH. Whooping cough. 1st ed. Springfield, IL: Charles C Thomas; 1943.
- Gordon J, Hood R. Whooping cough and its epidemiological anomalies. Am J Med Sci 1951;222:333--61.
- American Academy of Pediatrics. In: Toomey J, ed. Report of the Committee on Therapeutic Procedures for Acute Infectious Diseases and
on Biologicals of the American Academy of Pediatrics. Evanstown, Il: American Academy of Pediatrics; 1947.
- CDC. Pertussis vaccination: acellular pertussis vaccine for the fourth and fifth doses of the DTP series. Update to supplementary ACIP
Statement. Recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 1992;41(No. RR-15).
- CDC. Pertussis vaccination: acellular pertussis vaccine for reinforcing and booster use---supplementary ACIP statement. Recommendations of
the Immunization Practices Advisory Committee (ACIP). MMWR 1992;41(No. RR-1).
- Food and Drug Administration. Product approval information---licensing action, package insert: BOOSTRIX®. Tetanus Toxoid, Reduced Diphtheria Toxoid and Acellular Pertussis Vaccine, Adsorbed. GlaxoSmithKline Biologicals. Rockville, MD: US Department of Health and
Human Services, Food and Drug Administration, Center for Biologics Evaluation and Research; 2005. Available at
- Gilberg S, Njamkepo E, Du CIP, et al. Evidence of Bordetella pertussis infection in adults presenting with persistent cough in a French area
with very high whole-cell vaccine coverage. J Infect Dis 2002;186:415--8.
- Halperin SA. Canadian experience with implementation of an acellular pertussis vaccine booster-dose program in adolescents: implications for
the United States. Pediatr Infect Dis J. 2005;24:S141--S6.
- Tan T, Halperin S, Cherry JD, et al. Pertussis immunization in the Global Pertussis Initiative North American Region: recommended
strategies and implementation considerations. Pediatr Infect Dis J 2005;24:S83--S6.
- Von Konig C, Campins-Marti M, Finn A, Guiso N, Mertsola J, Liese JG. Pertussis immunization in the Global Pertussis Initiative
European Region: recommended strategies and implementation considerations. Pediatr Infect Dis J 2005;24:S87--S92.
- Public Health Agency of Canada. An Advisory Committee statement
(ACS), National Advisory Committee on Immunization (NACI):
Prevention of pertussis in adolescents and adults. Canada Communicable Disease Report 2003;29(No. ACS-5).
- Tan T, Trindade E, Skowronski D. Epidemiology of pertussis. Pediatr Infect Dis J 2005;24:S10--S8.
- National Health and Medical Research Council. The Australian Immunization Handbook. 8th ed. Canberra: Australian Government
Publishing Service; 2003.
- Ward JI, Cherry JD, Chang SJ, et al. Efficacy of an acellular pertussis vaccine among adolescents and adults. N Engl J Med 2005;353:1555--63.
- Hewlett EL. A commentary on the pathogenesis of pertussis. Clin Infect Dis 1999;28:S94--S8.
- Mattoo S, Cherry JD. Molecular pathogenesis, epidemiology, and clinical manifestations of respiratory infections due to
Bordetella pertussis and other Bordetella
subspecies. Clin Microbiol Rev 2005;18:326--82.
- Cherry JD. Epidemiological, clinical, and laboratory aspects of pertussis in adults. Clin Infect Dis 1999;28:S112--S117.
- Edwards KM, Decker MD. Pertussis vaccine. In: Plotkin S, Orenstein WA, eds. Vaccines. Philadelphia, PA: Saunders Co.; 2004.
- CDC. Diphtheria, tetanus, and pertussis: Recommendations for vaccine use and other preventive measures. Recommendations of
the Immunization Practices Advisory Committee (ACIP). MMWR 1991;40(No. RR-10).
- CDC. Recommended antimicrobial agents for the treatment and postexposure prophylaxis of pertussis: 2005 CDC Guidelines.
MMWR 2005;54(No. RR-14).
- Bortolussi R, Miller B, Ledwith M, et al. Clinical course of pertussis in immunized children. Pediatr Infect Dis J 1995;14:870--4.
- Biellik RJ, Patriarca PA, Mullen JR, et al. Risk factors for community- and household-acquired pertussis during a large-scale outbreak in
central Wisconsin. J Infect Dis 1988;157:1134--41.
- Garner J, Hospital Infection Control Practices Advisory Committee. Guideline for isolation precautions in hospitals. Infect Control
Hosp Epidemiol 1996;17:53--80.
- Sprauer MA, Cochi SL, Zell ER, et al. Prevention of secondary transmission of pertussis in households with early use of erythromycin. Am J
Dis Child 1992;146:177--81.
- Steketee RW, Wassilak SG, Adkins SGF, et al. Evidence of a high attack rate and efficacy of erythromycin prophylaxis in a pertussis outbreak in
a facility for developmentally disabled. J Infect Dis 1988;157:434--40.
- Thomas PF, McIntyre PB, Jalaludin BB. Survey of pertussis morbidity in adults in western Sydney. Med J Aust 2000;173:74--6.
- De Serres G, Shadmani R, Duval B, et al. Morbidity of pertussis in adolescents and adults. J Infect Dis 2000;182:174--9.
- Lee GM, Lett S, Schauer S, et al. Societal costs and morbidity of pertussis in adolescents and adults. Clin Infect Dis 2004;39:1572--80.
- Trollfors B, Rabo E. Whooping cough in adults. BMJ 1981;283:696--7.
- Wright SW. Pertussis infection in adults. South Med J 1998;91:702--8.
- Shvartzman P, Mader R, Stopler T. Herniated lumbar disc associated with pertussis. J Fam Pract 1989;224--5.
- Skowronski DM, Buxton JA, Hestrin M, Keyes RD, Lynch K, Halperin SA. Carotid artery dissection as a possible severe complication of
pertussis in an adult: clinical case report and review. Clin Infect Dis 2003;36:1--4.
- Postels-Multani S, Schmitt HJ, Wirsing von Konig CH, Bock HL, Bogaerts H. Symptoms and complications of pertussis in adults.
- MacLean DW. Adults with pertussis. J R Coll Gen Pract 1982;32:298--300.
- Halperin SA, Marrie TJ. Pertussis encephalopathy in an adult: case report and review. Rev Infect Dis 1991;13:1043--7.
- Eidlitz-Markus T, Zeharia A. Bordetella
pertussis as a trigger of migraine without aura. Pediatr Neurol 2005;33:283--4.
- Schellekens J, von Konig CH, Gardner P. Pertussis sources of infection and routes of transmission in the vaccination era. Pediatr Infect Dis
- Colebunders R, Vael C, Blot K, Van MJ, Van den EJ, Ieven M. Bordetella pertussis as a cause of chronic respiratory infection in an AIDS
patient. Eur J Clin Microbiol Infect Dis 1994;13:313--5.
- Doebbeling BN, Feilmeier ML, Herwaldt LA. Pertussis in an adult man infected with the human immunodeficiency virus. J Infect
- CDC. Fatal case of unsuspected pertussis diagnosed from a blood culture---Minnesota, 2003. MMWR 2004;53:131--2.
- Vitek CR, Pascual FB, Baughman AL, Murphy TV. Increase in deaths from pertussis among young infants in the United States in the
1990s. Pediatr Infect Dis J 2003;22:628--34.
- Mertens PL, Stals FS, Schellekens JF, Houben AW, Huisman J. An epidemic of pertussis among elderly people in a religious institution in
The Netherlands. Eur J Clin Microbiol Infect Dis 1999;18:242--7.
- Preziosi M, Halloran M. Effects of pertussis vaccination on disease: vaccine efficacy in reducing clinical severity. Clin Infect Dis 2003;37:772--9.
- Bisgard KM, Pascual FB, Ehresmann KR, et al. Infant pertussis: who was the source? Pediatr Infect Dis J 2004;23:985--9.
- Lind-Brandberg L, Welinder-Olsson C, Laggergard T, Taranger J, Trollfors B, Zackrisson G. Evaluation of PCR for diagnosis of
Bordetella pertussis and Bordetella
parapertussis infections. J Clin Microbiol 1998;36:679--83.
- Cherry JD, Grimprel E, Guiso N, Heininger U, Mertsola J. Defining pertussis epidemiology: clinical, microbiologic and serologic
perspectives. Pediatr Infect Dis J 2005;24:S25--S34.
- Young S, Anderson G, Mitchell P. Laboratory observations during an outbreak of pertussis. Clinical Microbiology Newsletter 1987;9:176--9.
- Van der Zee A, Agterberg C, Peeters M, Mooi F, Schellekens J. A clinical validation of
Bordetella pertussis and Bordetella
parapertussis polymerase chain reaction: comparison with culture and serology using samples from patients with suspected whooping cough from a highly
immunized population. J Infect Dis 1996;174:89--96.
- Viljanen MK, Ruuskanen O, Granberg C, Salmi T. Serological diagnosis of pertussis: IgM, IgA and IgG antibodies against
Bordetella pertussis measured by enzyme-linked Immunosorbent Assay (ELISA). Scand J Infect Dis 1982;14:117--22.
- Hallander HO. Microbiological and serological diagnosis of pertussis. Clin Infect Dis 1999;28:S99--S106.
- Loeffelholz MJ, Thompson CJ, Long KS, Gilchrist MJR. Comparison of PCR, culture, and direct fluorescent-antibody testing for detection
of Bordetella pertussis. J Clin Microbiol 1999;37:2872--6.
- Lievano FA, Reynolds MA, Waring AL, et al. Issues associated with and recommendations for using PCR to detect outbreaks of pertussis. J
Clin Microbiol 2002;40:2801--5.
- He Q, Viljanen MK, Arvilommi H, Aittanen B, Mertsola J. Whooping cough caused by
Bordetella pertussis and Bordetella
parapertussis in an immunized population. JAMA 1998;280:635--7.
- CDC. Guidelines for the control of pertussis outbreaks. Atlanta, GA: US Department of Health and Human Services, CDC; 2000.
- Council of State and Territorial Epidemiologists. CSTE Position Statement 1997-ID-9: Public health surveillance, control and prevention
of pertussis. Atlanta, GA: Council of State and Territorial Epidemiologists, 1997.
- Marchant CD, Loughlin AM, Lett SM, et al. Pertussis in Massachusetts, 1981--1991: incidence, serologic diagnosis, and vaccine effectiveness.
J Infect Dis 1994;169:1297--305.
- Roush S, Birkhead G, Koo D, Cobb A, Fleming D. Mandatory reporting of diseases and conditions by health care professionals and
laboratories. JAMA 1999;282:164--70.
- CDC. Pertussis---United States, 2001--2003. MMWR 2005;54:1283--6.
- Tanaka M, Vitek CR, Pascual FB, Bisgard KM, Tate JE, Murphy TV. Trends in pertussis among infants in the United States, 1980--1999.
- Guris D, Strebel PM, Bardenheier B, et al. Changing epidemiology of pertussis in the United States: increasing reported incidence among
adolescents and adults, 1990--1996. Clin Infect Dis 1999;28:1230--7.
- Farizo KM, Cochi SL, Zell ER, Brink EW, Wassilak SG, Patriarca PA. Epidemiological features of pertussis in the United States, 1980--1989.
Clin Infect Dis 1992;14:708--19.
- Cherry JD. The science and fiction of the "resurgence" of pertussis. Pediatrics 2003;112:405--6.
- Broutin H, Guegan JF, Elguero E, Simondon F, Cazelles B. Large-scale comparative analysis of pertussis population dynamics:
periodicity, synchrony, and impact of vaccination. Am J Epidemiol 2005;161:1159--67.
- Cortese MM, Baughman AL, Brown K, Srivastava P. A new age in pertussis prevention---new opportunities through adult vaccination. Am J
Prev Med 2007 (In press).
- Jackson LA, Cherry JD, Wang SP, Grayston JT. Frequency of serological evidence of
Bordetella infections and mixed infections with
other respiratory pathogens in university students with cough illnesses. Clin Infect Dis 2000;31:3--6.
- Mink CM, Cherry JD, Christenson P, et al. A search for Bordetella pertussis infection in university students. Clin Infect Dis 1992;14:464--71.
- Rosenthal S, Strebel P, Cassiday P, Sanden G, Brusuelas K, Wharton M. Pertussis infection among adults during the 1993 outbreak in Chicago.
J Infect Dis 1995;171:1650--2.
- Jansen DL, Gray GC, Putnam SD, Lynn F, Meade BD. Evaluation of pertussis in U.S. Marine Corps trainees. Clin Infect Dis 1997;25:1099--107.
- Nennig ME, Shinefield HR, Edwards KM, Black SB, Fireman BH. Prevalence and incidence of adult pertussis in an urban population.
JAMA 1996;275:1672--4.
- Strebel P, Nordin J, Edwards K, et al. Population-based incidence of pertussis among adolescents and adults, Minnesota, 1995--1996. J Infect
- Hodder SL, Cherry JD, Mortimer Jr EA, Ford AB, Gornbein J, Papp K. Antibody responses to Bordetella pertussis antigens and
clinical correlations in elderly community residents. Clin Infect Dis 2000;31:7--14.
- Baughman AL, Bisgard KM, Edwards KM, et al. Establishment of diagnostic cutoff points for levels of serum antibodies to pertussis toxin,
filamentous hemagglutinin, and fimbriae in adolescents and adults in the United States. Clin Diagn Lab Immunol 2004;11:1045--53.
- Ward JI, Cherry JD, Chang SJ, et al. Bordetella pertussis infections in vaccinated and unvaccinated adolescents and adults, as assessed in a
national prospective randomized acellular pertussis vaccine trial (APERT). Clin Infect Dis 2006;43:151--7.
- Long SS, Welkon CJ, Clark JL. Widespread silent transmission of pertussis in families: antibody correlates of infection and symptomatology.
J Infect Dis 1990;161:480--6.
- Addiss DG, Davis JP, Meade BD, et al. A pertussis outbreak in a Wisconsin nursing home. J Infect Dis 1991;164:104--110.
- CDC. Pertussis outbreak---Vermont, 1996. MMWR 1997;46:822--6.
- Dworkin MS. An outbreak of pertussis demonstrating a substantial proportion of cases with post-tussive vomiting and whooping in
adolescents and adults. Boston, MA: Infectious Disease Society of America, 42nd Meeting September 30--October 3, 2004.
- CDC. Pertussis outbreak among adults at an oil refinery---Illinois, August--October 2002. MMWR 2003;52:1--4.
- CDC. Pertussis outbreaks---Massachusetts and Maryland, 1992. MMWR 1993;42:197--200.
- CDC. School-associated pertussis outbreaks---Yavapai County, Arizona, September 2002--February 2003. MMWR 2004;53:216--219.
- Wassilak SG, Roper MH, Murphy TV, Orenstein WA. Tetanus toxoid. In: Plotkin S, Orenstein WA, eds. Vaccines. 4th ed. Philadelphia,
PA: Saunders Co.; 2004.
- CDC. Tetanus surveillance---United States, 1998--2000. MMWR 2003;52:1--8.
- Srivastava P, Brown K, Chen J, Kretsinger K, Roper MH. Trends in tetanus epidemiology in the United States, 1972--2001. Workshop 27.
39th National Immunization Conference, Washington, DC. March 21--24, 2005.
- McQuillan G, Kruszon-Moran D, Deforest A, Chu S, Wharton M. Serologic immunity to diphtheria and tetanus in the United States. Ann
Intern Med 2002;136:660--6.
- Craig A, Reed G, Mohon R, et al. Neonatal tetanus in the United States: a sentinel event in the foreign-born. Pediatr Infect Dis J 1997;16:955--9.
- CDC. Neonatal tetanus---Montana, 1998. MMWR 1998;47:928--30.
- Newell KW, Duenas LA, LeBlanc DR, Garces Osorio N. The use of toxoid for the prevention of tetanus neonatorum: final report of a
double-blind controlled field trial. Bull World Health Organ 1966;35:863--71.
- Newell KW, LeBlanc DR, Edsall G, et al. The serological assessment of a tetanus toxoid field trial. Bull World Health Organ 1971;45:773--85.
- Galazka A. The immunological basis for immunization series---module 4: pertussis. Geneva, Switzerland: World Health Organization;
- World Health Organization. Maternal and neonatal tetanus elimination by 2005: strategies for achieving and maintaining elimination
Geneva: World Health Organization, UNICEF, UNFPA; 2000.
- Wharton M, Vitek CR. Diphtheria toxoid. In: Plotkin S, Orenstein WA, eds. Vaccines. Philadelphia, PA: Saunders Co.; 2004.
- CDC. Manual for the Surveillance of Vaccine-Preventable Diseases. 3rd ed. Available at
- CDC. Toxigenic Corynebacterium
diphtheriae---Northern Plains Indian Community, August--October 1996. MMWR 1997;46:506--10.
- CDC. Fatal respiratory diphtheria in a U.S. traveler to Haiti---Pennsylvania, 2003. MMWR 2004;52:1285--6.
- CDC. Availability of diphtheria antitoxin through an investigational new drug protocol. MMWR 2004;53:413.
- Food and Drug Administration. International Conference on Harmonization: Guidance on Statistical Principles for Clinical Trials. Federal
- Food and Drug Administration. Vaccines and Related Biological Products Advisory Committee, March 15, 2005: FDA clinical briefing
document for tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine adsorbed (Tdap, ADACEL), Aventis Pasteur, Limited.
Rockville, MD: US Department of Health and Human Services, Food and Drug Administration; 2005. Available at
- Pichichero ME, Rennels MB, Edwards KM, et al. Combined tetanus, diphtheria, and 5-component pertussis vaccine for use in adolescents
and adults. JAMA 2005;293:3003--11.
- Cherry J, Gornbein J, Heininger U, Stehr K. A search for serologic correlates of immunity to
Bordetella pertussis cough illnesses.
- Food and Drug Administration. Vaccines and Related Biological Products Advisory Committee meeting. Bethesda, MD: June 5, 1997. Available
- Gustafsson L, Hallander HO, Olin P, Reizenstein E, Storsaeter J. A controlled trial of a two-component acellular, a five- component acellular, and a
whole-cell pertussis vaccine. N Engl J Med 1996;334:349--55.
- Food and Drug Administration. sanofi pasteur. ADACEL briefing document. Bethesda, MD: US Department of Health and Human
Services, Food and Drug Administration, Center for Biologic Evaluation and Research; March 15, 2005. Available at
- Bjorkholm B, Granstrom M, Wahl M, Hedstrom CE, Hagberg L. Adverse reactions and immunogenicity in adults to regular and increased
dosage of diphtheria vaccine. Eur J Clin Microbiol 1987;6:637--40.
- Edsall G, Altman JS, Gaspar AJ. Combined tetanus-diphtheria immunization of adults: use of small doses of diphtheria toxoid. Am J
Public Health 1954;44:1537--45.
- Edsall G, Elliott MW, Peebles TC, Eldred MC. Excessive use of tetanus toxoid boosters. JAMA 1967;202:111--3.
- Galazka AM, Robertson SE. Immunization against diphtheria with special emphasis on immunization of adults. Vaccine 1996;14:845--57.
- Pappenheimer A Jr, Edsall G, Lawrence H, Banton H. A study of reactions following administration of crude and purified diphtheria toxoid in
an adult population. Am J Hyg 1950;52:353--70.
- Relyveld EH, Bizzini B, Gupta RK. Rational approaches to reduce adverse reactions in man to vaccines containing tetanus and diphtheria
toxoids. Vaccine 1998;16:1016--23.
- James G, Longshore W Jr, Hendry J. Diphtheria immunization studies of students in an urban high school. Am J Hyg 1951;53:178--201.
- Lloyd JC, Haber P, Mootrey GT, Braun MM, Rhodes PH, Chen RT. Adverse event reporting rates following tetanus-diphtheria and tetanus
toxoid vaccinations: data from the Vaccine Adverse Event Reporting System (VAERS), 1991--1997. Vaccine 2003;21:3746--50.
- Froehlich H, Verma R. Arthus reaction to recombinant hepatitis B virus vaccine. Clin Infect Dis 2001;33:906--8.
- Moylett EH, Hanson IC. Mechanistic actions of the risks and adverse events associated with vaccine administration. J Allergy Clin
- Nikkels A, Nikkels-Tassoudji N, Pierard G. Cutaneous adverse reactions following anti-infective vaccinations. Am J Clin Dermatol 2005;6:79--87.
- Stratton KR, Howe CJ, Johnston RB Jr. Adverse events associated with childhood vaccines other than pertussis and rubella: summary of a
report from the Institute of Medicine. JAMA 1994;271:1602--5.
- Terr AI. Immune-complex allergic diseases. In: Parslow TG, Stites DP, Terr AI, et al., eds. Medical Immunology. 10th ed. New York, NY: Lange
Medical Books/McGraw-Hill Medical Publications Division; 2001.
- Ponvert C, Scheinmann P. Vaccine allergy and pseudo-allergy. Eur J Dermatol 2003;13:10--5.
- Halperin SA, Scheifele D, Mills E, et al. Nature, evolution, and appraisal of adverse events and antibody response associated with the
fifth consecutive dose of a five-component acellular pertussis-based combination vaccine. Vaccine 2003;21:2298--306.
- Liese JG, Stojanov S, Zink TH, et al. Safety and immunogenicity of Biken acellular pertussis vaccine in combination with diphtheria and
tetanus toxoid as a fifth dose at four to six years of age. Pediatr Infect Dis J 2001;20:981--8.
- Scheifele DW, Halperin SA, Ferguson AC. Assessment of injection site reactions to an acellular pertussis-based combination vaccine,
including novel use of skin tests with vaccine antigens. Vaccine 2001;19:4720--6.
- Rennels MB, Deloria MA, Pichichero ME, et al. Extensive limb swelling after booster doses of acellular pertussis-tetanus-diphtheria
vaccine. Pediatrics 2000;105:12.
- CDC. Use of diphtheria toxoid-tetanus toxoid-acellular pertussis vaccine as a five-dose series: supplemental recommendations of the
Advisory Committee on Immunization Practices (ACIP). MMWR 2000;49(No. RR-13).
- Woo EJ, Burwen DR, Gatumu SN, Ball R. Extensive limb swelling after immunization: reports to the Vaccine Adverse Event Reporting
System. Clin Infect Dis 2003;37:351--8.
- Rennels MB. Extensive swelling reactions occurring after booster doses of diphtheria-tetanus-acellular pertussis vaccines. Semin Pediatr Infect
- CDC. General recommendations on immunization: recommendations of the Advisory Committee on Immunization Practices (ACIP).
MMWR 2006;55(No. RR-15).
- Halperin S, Sweet L, Baxendale D. How soon after a prior tetanus-diphtheria vaccination can one give adult formulation
tetanus-diphtheria-acellular pertussis vaccine? Pediatr Infect Dis J 2006;25:195--200.
- David ST, Hemsley C, Pasquali PE, Larke B, Buxton JA, Lior LY. Enhanced surveillance for vaccine-associated adverse events: dtap catch-up
of high school students in Yukon. Can Commun Dis Rep 2005;31:117--26.
- Public Health Agency of Canada. An advisory committee statement
(ACS), National Advisory Committee on Immunization (NACI): statement
on adult/adolescent formulation of combined acellular pertussis, tetanus, and diphtheria vaccine. Can Commun Dis Rep 2000;26(No. ACS-1).
- CDC. Prevention and control of meningococcal disease: recommendations of the Advisory Committee on Immunization Practices
(ACIP). MMWR 2005;54(No. RR-7).
- Food and Drug Administration. Product approval information---licensing action, package insert: Meningococcal (Groups A,C,Y, and
W-135) Polysaccharide Diphtheria Toxoid Conjugate Vaccine Menactra. Aventis Pasteur. Rockville, MD: US Department of Health and Human
Services, Food and Drug Administration, Center for Biologics Evaluation and Research; 2004. Available at
- Food and Drug Administration. Aventis Pasteur. Menactra briefing document. Bethesda, MD: US Department of Health and Human
Services, Center for Biologic Evaluation and Research; September 22, 2004. Available at
- CDC. Update: vaccine side effects, adverse reactions, contraindications, and precautions: recommendation of the Advisory Committee
on Immunization Practices (ACIP). MMWR 1996;45(No. RR-12).
- Moore DL, Le Saux N, Scheifele D, Halperin SA. Lack of evidence of encephalopathy related to pertussis vaccine: active surveillance by
IMPACT, Canada, 1993--2002. Pediatr Infect Dis J 2004;23:568--71.
- Pollard JD, Selby G. Relapsing neuropathy due to tetanus toxoid: report of a case. J Neurol Sci 1978;37:113--25.
- Tuttle J, Chen RT, Rantala H, Cherry JD, Rhodes PH, Hadler S. The risk of Guillain-Barre syndrome after tetanus-toxoid-containing vaccines
in adults and children in the United States. Am J Public Health 1997;87:2045--8.
- CDC. Guide to contraindications to vaccination. Atlanta, GA: US Department of Health and Human Services, CDC; 2003.
- Fenichel G. Assessment: neurologic risk of immunization: report of the Therapeutics and Technology Assessment Subcommittee of the
American Academy of Neurology. Neurology 1999;52:1546--52.
- Pichichero ME, Treanor J. Economic impact of pertussis. Arch Pediatr Adolesc Med 1997;151:35--40.
- Lee LH, Pichichero ME. Costs of illness due to Bordetella pertussis in families. Arch Fam Med 2000;9:989--96.
- O'Brien JA, Caro JJ. Hospitalization for pertussis: profiles and case costs by age. BMC Infect Dis 2005;5:57.
- Purdy KW, Hay JW, Botteman MF, Ward JI. Evaluation of strategies for use of acellular pertussis vaccine in adolescents and adults: a
cost-benefit analysis. Clin Infect Dis 2004;39:20--8.
- Lee GM, LeBaron C, Murphy TV, Lett S, Schauer S, Lieu TA. Pertussis in adolescents and adults: should we vaccinate? Pediatrics
- Lee GM, Murphy TV, Lett S, et al. Cost-effectiveness of pertussis vaccination in adults. Am J Prev Med 2007(In press).
- Chapman RH, Stone PW, Sandberg EA, Bell C, Neumann PJ. A comprehensive league table of cost-utility ratios and a sub-table of
"panel-worthy" studies. Med Decis Making 2000;20:451--67.
- Stone PW, Teutsch S, Chapman RH, Bell C, Goldie SJ, Neumann PJ. Cost-utility analyses of clinical preventive services: published ratios,
1976--1997. Am J Prev Med 2000;19:15--23.
- Winkelmayer WC, Weinstein MC, Mittleman MA, Glynn RJ, Pliskin JS. Health economic evaluations: the special case of end-stage renal
disease treatment. Med Decis Making 2002;22:417--30.
- CDC. Percentage of persons aged >18 years who reported receiving influenza or pneumococcal vaccine or tetanus toxoid, by age and
selected characteristics---National Health Interview Survey, United States, 1999. Available at
- CDC. Record of the Meeting of the Advisory Committee on Immunization Practices, October 26--27, 2005. Available at
- CDC. Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices (ACIP).
MMWR 2006;55(No. RR-8).
- Clark SJ, Adolphe S, Davis MM, Cowan AE, Kretsinger K. Attitudes of U.S. obstetricians toward a combined
tetanus-diphtheria-acellular pertussis vaccine for adults. Infect Dis Obstet 2006;87:1--5.
- American College of Obstetrics and Gynecology. Immunization during pregnancy. ACOG Committee Opinion 2003;282:1--6.
- Henshaw SK. Unintended pregnancy in the United States. Fam Plann Perspect 1998;30:24--30.
- CDC. Measles, mumps, and rubella---vaccine use and strategies for elimination of measles, rubella, and congenital rubella syndrome and
control of mumps: recommendations of the Advisory Committee on Immunization Practices (ACIP). MMWR 1998;47(No. RR-8).
- Bascom S, Miller S, Greenblatt J. Assessment of perinatal hepatitis B and rubella prevention in New Hampshire delivery hospitals.
- Schrag SJ, Fiore AE, Gonik B, et al. Vaccination and perinatal infection prevention practices among obstetrician-gynecologists. Obstet
- Van Rie A, Hethcote HW. Adolescent and adult pertussis vaccination: computer simulations of five new strategies. Vaccine 2004;22:3154--65.
- Scuffham PA, McIntyre PB. Pertussis vaccination strategies for neonates---an exploratory cost-effectiveness analysis. Vaccine 2004;22:2953--64.
- CDC. Recommended adult immunization schedule---United States, October 2006--September 2007. MMWR 2006;55:Q1--Q4.
- Maclennan R, Schofield FD, Pittman M, Hardegree MC, Barile MF. Immunization against neonatal tetanus in New Guinea: antitoxin response
of pregnant women to adjuvant and plain toxoids. Bull World Health Organ 1965;32:683--97.
- Schofield F, Tucker V, Westbrook G. Neonatal tetanus in New Guinea: effect of active immunization in pregnancy. BMJ 1961;5255:785--9.
- Healy CM, Munoz FM, Rench MA, Halasa NB, Edwards KM, Baker CJ. Prevalence of pertussis antibodies in maternal delivery, cord, and
infant serum. J Infect Dis 2004;190:335--40.
- Van Savage J, Decker MD, Edwards KM, Sell SH, Karzon DT. Natural history of pertussis antibody in the infant and effect on vaccine response.
J Infect Dis 1990;161:487--92.
- Van Rie A, Wendelboe AM, Englund JA. Role of maternal pertussis antibodies in infants. Pediatr Infect Dis J 2005;24:S62--S5.
- Englund JA, Anderson EL, Reed GF, et al. The effect of maternal antibody on the serologic response and the incidence of adverse reactions
after primary immunization with acellular and whole-cell pertussis vaccines combined with diphtheria and tetanus toxoids. Pediatrics 1995;96:580--4.
- Halsey N, Galazka A. The efficacy of DPT and oral poliomyelitis immunization schedules initiated from birth to 12 weeks of age. Bull
World Health Organ 1985;63:1151--69.
- Siegrist CA. Mechanisms by which maternal antibodies influence infant vaccine responses: review of hypotheses and definition of
main determinants. Vaccine 2003;21:3406--12.
- Food and Drug Administration. Product approval
information---licensing action, package insert: Td. Tetanus and Diphtheria Toxoids
Adsorbed For Adult Use. Massachusetts Public Health Biologic Laboratories. Rockville, MD: US Department of Health and Human Services, Food
and Drug Administration, Center for Biologics Evaluation and Research; 2000.
- Food and Drug Administration. Product approval
information---licensing action, package insert: Td. Tetanus and Diphtheria Toxoids
Adsorbed For Adult Use. sanofi pasteur. Rockville, MD: US Department of Health and Human Services, Food and Drug Administration, Center
for Biologics Evaluation and Research; 2003.
- Food and Drug Administration. Product approval
information---licensing action, package insert: DECAVACTM. Tetanus and Diphtheria
Toxoids Adsorbed For Adult Use. sanofi pasteur. Rockville, MD: US Department of Health and Human Services, Food and Drug Administration, Center
for Biologics Evaluation and Research; 2004.
- Food and Drug Administration. Product approval
information---licensing action, package insert: TENIVACTM. Tetanus and Diphtheria
Toxoids Adsorbed For Adult Use. sanofi pasteur. Rockville, MD: US Department of Health and Human Services, Food and Drug Administration, Center
for Biologics Evaluation and Research; 2005.
- Czeizel A, Rockenbauer M. Tetanus toxoid and congenital abnormalities. Int J Gynaecol Obstet 1999;64:253--8.
- Silveira CM, Caceres VM, Dutra MG, Lopes-Camelo J, Castilla EE. Safety of tetanus toxoid in pregnant women: a hospital-based
case-control study of congenital anomalies. Bull World Health Organ 1995;73:605--8.
- Altemeier WA, Ayoub EM. Erythromycin prophylaxis for pertussis. Pediatrics 1977;59:623--5.
- Christie CD, Glover AM, Willke MJ, Marx ML, Reising SF, Hutchinson NM. Containment of pertussis in the regional pediatric hospital
during the Greater Cincinnati epidemic of 1993. Infect Control Hosp Epidemiol 1995;16:556--63.
- Kurt TL, Yeager AS, Guenette S, Dunlop S. Spread of pertussis by hospital staff. JAMA 1972;221:264--7.
- Shefer A, Dales L, Nelson M, Werner B, Baron R, Jackson R. Use and safety of acellular pertussis vaccine among adult hospital staff during
an outbreak of pertussis. J Infect Dis 1995;171:1053--6.
- Fisher MC, Long SS, McGowan KL, Kaselis E, Smith DG. Outbreak of pertussis in a residential facility for handicapped people. J
- Patriarca PA, Steketee RW, Biellik RJ, et al. Outbreaks of pertussis in the United States: the Wisconsin experience. Tokai J Exp Clin
- Steketee RW, Burstyn DG, Wassilak SG, et al. A comparison of laboratory and clinical methods for diagnosing pertussis in an outbreak in a
facility for the developmentally disabled. J Infect Dis 1988;157:441--9.
- Tanaka Y, Fujinaga K, Goto A, et al. Outbreak of pertussis in a residential facility for handicapped people. Developments in
Biological Standardization 1991;73:329--32.
- Halsey NA, Welling MA, Lehman RM. Nosocomial pertussis: a failure of erythromycin treatment and prophylaxis. Am J Dis Child 1980;134:521--2.
- Matlow AG, Nelson S, Wray R, Cox P. Nosocomial acquisition of pertussis diagnosed by polymerase chain reaction. Infect Control
Hosp Epidemiol 1997;18:715--6.
- CDC. Outbreaks of pertussis associated with hospitals---Kentucky, Pennsylvania, and Oregon, 2003. MMWR 2005;54:67--71.
- Linnemann CC Jr, Ramundo N, Perlstein PH, Minton SD, Englender GS. Use of pertussis vaccine in an epidemic involving hospital staff.
- Bryant KA, Humbaugh K, Brothers K. Measures to control an outbreak of pertussis in a neonatal intermediate care nursery after exposure to
a healthcare worker. Infect Control Hosp Epidemiol 2006;27:6--12.
- Spearing NM, Horvath RL, McCormack JG. Pertussis: adults as a source in healthcare settings. Med J Aust 2002;177:568--9.
- Valenti WM, Pincus PH, Messner MK. Nosocomial pertussis: possible spread by a hospital visitor. Am J Dis Child 1980;134:520--1.
- Bamberger E, Starets-Haham O, Greenberg D, et al. Adult pertussis is hazardous for the newborn. Infect Control Hosp Epidemiol
- McCall BJ, Tilse M, Burt B, Watt P, Barnett M, McCormack JG. Infection control and public health aspects of a case of pertussis infection in
a maternity health care worker. Commun Dis Intell 2002;26:584--6.
- Calugar A, Ortega-Sanchez IR, Tiwari T, Oakes L, Jahre JA, Murphy TV. Nosocomial pertussis: costs of an outbreak and benefits of
vaccinating health care workers. Clin Infect Dis 2006;42:981--8.
- Gehanno JF, Pestel-Caron M, Nouvellon M, Caillard JF. Nosocomial pertussis in healthcare workers from a pediatric emergency unit in
France. Infect Control Hosp Epidemiol 1999;20:549--52.
- Boulay BR, Murray CJ, Ptak J, Kirkland KB, Montero J, Talbot EA. An outbreak of pertussis in a hematology-oncology care unit: implications
for adult vaccination policy. Infect Control Hosp Epidemiol 2006;27:92--5.
- Ward A, Caro J, Bassinet L, Housset B, O'Brien JA, Guiso N. Health and economic consequences of an outbreak of pertussis among
healthcare workers in a hospital in France. Infect Control Hosp Epidemiol 2005;26:288--92.
- Baggett HC, Duchin JS, Shelton W, et al. Two nosocomial pertussis outbreaks and their associated costs---King County, Washington, 2004.
Infect Control Hosp Epidemiol (In press).
- Lane NE, Paul RI, Bratcher DF, Stover BH. A survey of policies at children's hospitals regarding immunity of healthcare workers: are
physicians protected? Infect Control Hosp Epidemiol 1997;18:400--4.
- Zivna I, Bergin D, Casavant J, et al. Impact of Bordetella pertussis exposures on a Massachusetts tertiary care medical system, FY 2004.
Infect Control Hosp Epidemiol 2007(In press).
- Haiduven DJ, Hench CP, Simpkins SM, Stevens DA. Standardized management of patients and employees exposed to pertussis. Infect
Control Hosp Epidemiol 1998;19:861--4.
- Daskalaki I, Hennesey P, Hubler R, Long SS. Exposure of pediatric HCWs to pertussis is unavoidable and management is resource
intensive [Abstract no. 1173]. 43rd Meeting of the Infectious Disease Society of America, 2005.
- Deville JG, Cherry JD, Christenson PD, et al. Frequency of unrecognized
Bordetella pertussis infections in adults. Clin Infect Dis
- Wright SW, Decker MD, Edwards KM. Incidence of pertussis infection in healthcare workers. Infect Control Hosp Epidemiol 1999;20:120--3.
- Pascual FB, McCall CL, McMurtray A, Payton T, Smith F, Bisgard KM. Outbreak of pertussis among healthcare workers in a hospital
surgical unit. Infect Control Hosp Epidemiol 2006;27:546--52.
- Alles SJ WB. Role of health care workers in a pertussis outbreak in a neonatal intensive care unit [Abstract # 0804]. The First
International Neonatal Vaccination Workshop, March 2--4, 2004, McLean, VA, 2006.
- Giugliani C, Vidal-Trecan G, Traore S, et al. Feasibility of azithromycin prophylaxis during a pertussis outbreak among healthcare workers in
a university hospital in Paris. Infect Control Hosp Epidemiol 2006;27:626--9.
- Misra-Hebert AD, Kay R, Stoller JK. A review of physician turnover: rates, causes, and consequences. Am J Med Qual 2004;19:56--66.
- Ruhe M, Gotler RS, Goodwin MA, Stange KC. Physician and staff turnover in community primary care practice. J Ambul Care
- Bohlke K, Davis RL, Marcy SM, et al. Risk of anaphylaxis after vaccination of children and adolescents. Pediatrics 2003;112:815--20.
- Zhou F, Reef S, Massoudi M, et al. An economic analysis of the current universal 2-dose measles-mumps-rubella vaccination program in
the United States. J Infect Dis 2004;189:S131--S45.
- CDC. Vaccine price list. Available at
- Talbot TR, Bradley SE, Cosgrove SE, Ruef C, Siegel JD, Weber DJ. Influenza vaccination of healthcare workers and vaccine allocation
for healthcare workers during vaccine shortages. Infect Control Hosp Epidemiol 2005;26:882--90.
- King WA. Brief report: influenza vaccination and health care workers in the United States. J Gen Intern Med 2006;21:1--4.
- CDC. Interventions to increase influenza vaccination of health-care workers---California and Minnesota. MMWR 2005;54:196--9.
- Edwards KM, Talbot TR. The challenges of pertussis outbreaks in healthcare facilities: is there a light at the end of the tunnel? Infect
Control Hosp Epidemiol 2006;27:537--40.
- CDC. Immunization of health-care workers: recommendations of the Advisory Committee on Immunization Practices (ACIP) and the
Hospital Infection Control Practices Advisory Committee (HICPAC). MMWR 1997;46(No. RR-18).
- Haiduven DJ, Hench CP, Simpkins SM, Scott KE, Stevens DA. Management of varicella-vaccinated patients and employees exposed to varicella
in the healthcare setting. Infect Control Hosp Epidemiol 2003;24:538--43.
- Josephson A, Karanfil L, Gombert ME. Strategies for the management of varicella-susceptible healthcare workers after a known exposure.
Infect Control Hosp Epidemiol 1990;11:309--13.
- Klevens RM, Kupronis BA, Lawton R, et al. Monitoring health care workers after smallpox vaccination: findings from the Hospital
Smallpox Vaccination-Monitoring System. Am J Infect Control 2005;33:315--9.
- CDC. Recommendations for using smallpox vaccine in a pre-event vaccination program: supplemental recommendations of the
Advisory Committee on Immunization Practices (ACIP) and the Healthcare Infection Control Practices Advisory Committee (HICPAC).
- CDC. Recommended Childhood and Adolescent Immunization Schedule---United States, 2006. MMWR 2006;54:Q1--Q4.
- Graham DR, Dan BB, Bertagnoll P, Dixon RE. Cutaneous inflammation caused by inadvertent intradermal administration of DTP instead
of PPD. Am J Public Health 1981;71:1040--3.
- Institute for Safe Medication Practices. Hazard alert! Confusion between tetanus diptheria toxoid (Td) and tuberculin purified protein
derivative (PPD) led to unnecessary treatment. Huntingdon Valley, PA: Institute for Safe Medication Practices. Available at
- CDC. Inadvertent intradermal administration of tetanus toxoid--containing vaccines instead of tuberculosis skin tests. MMWR 2004;53:662--4.
- American Academy of Pediatrics. Pertussis. In: Pickering LK, ed. Red Book: 2006 Report of the Committee on Infectious Diseases. 26 ed.
Elk Grove Village, IL: American Academy of Pediatrics; 2006.
- CDC. Provisional recommendations for use of Tdap in pregnant women. Available at
* Booster response defined as a fourfold rise in antibody concentration if the prevaccination concentration was equal to or below the cutoff value and a
twofold rise in antibody concentration if the prevaccination concentration was above the cutoff value. The cutoff value for tetanus was 2.7 IU/mL. The cutoff
value for diphtheria was 2.56 IU/mL.
† A booster response for each antigen was defined as a fourfold rise in antibody concentration if the prevaccination concentration was equal to or below
the cutoff value and a twofold rise in antibody concentration if the prevaccination concentration was above the cutoff value. The cutoff values for
pertussis antigens were 85 EU/mL for PT, 170 EU/mL for FHA, 115 EU/mL for PRN, and 285 EU/mL for FIM.
§ A hemagglutinin inhibition titer >1:40 IU/mL for each influenza antigen was considered seropositive.
¶ The noninferiority criterion was met if the upper limit of the 95% confidence interval on the difference in the percentage of subjects in the two groups
(rate following simultaneous vaccination minus rate following sequential vaccination) was <10%.
** An antihepatitis B surface antigen of >10 mIU/mL was considered seroprotective.
†† U.S. Food and Drug Administration Pregnancy Category C. Animal studies have documented an adverse effect, and no adequate and
well-controlled studies in pregnant women have been conducted; or no animal studies and no adequate and well-controlled studies in pregnant women have been conducted.
§§ Recommendations for use of Tdap among HCP were reviewed and are supported by the members of HICPAC.
¶¶ Hospitals, as defined by the Joint Commission on Accreditation of Healthcare Organizations, do not include long-term--care facilities such
as nursing homes, skilled-nursing facilities, or rehabilitation and convalescent-care facilities. Ambulatory-care settings include all outpatient and
*** For adolescents, any progressive neurologic disorder (including progressive encephalopathy) is considered a precaution for receipt of Tdap. For
adults, progressive neurologic disorders are considered precautions only if the condition is unstable (CDC. Preventing tetanus, diphtheria, and pertussis
among adolescents: use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccines: recommendations of the Advisory Committee on
Immunization Practices [ACIP]. MMWR 2006;55[No. RR-3]).
Advisory Committee on Immunization Practices Pertussis Working Group
Chairman: Dale Morse, MD, Albany, New York.
Members: Dennis Brooks, MD, Baltimore, Maryland; Karen R. Broder, MD, Atlanta, Georgia; James Cherry, MD, Los Angeles, California;
Allison Christ, MD, Washington, District of Columbia; Richard Clover, MD, Louisville, Kentucky; James Cheek, MD, Albuquerque, New Mexico;
Amanda Cohn, MD, Atlanta, Georgia; Margaret M. Cortese, MD, Atlanta, Georgia; Shelley Deeks, MD, Toronto, Ontario, Canada; Lorraine Kay
Duncan, Portland, Oregon; Geoffrey S. Evans, MD, Rockville, Maryland; Theresa Finn, PhD, Rockville, Maryland; Stanley A. Gall, MD, Louisville,
Kentucky; Andrea Gelzer, MD, Hartford, Connecticut; Steve Gordon, MD, Cleveland, Ohio; Janet Gilsdorf, MD, Ann Arbor, Michigan; John Iskander,
MD, Atlanta, Georgia; M. Patricia Joyce, MD, Atlanta, Georgia; David Klein, PhD, Bethesda, Maryland; Katrina Kretsinger, MD, Atlanta, Georgia;
Grace Lee, MD, Boston, Massachusetts; Susan Lett, MD, Boston, Massachusetts; Sarah Long, MD, Philadelphia, Pennsylvania; Bruce Meade, PhD,
Rockville, Maryland; Christina Mijalski, MPH, Atlanta, Georgia; Julie Morita, MD, Chicago, Illinois; Trudy V. Murphy, MD, Atlanta, Georgia; Kathleen
Neuzil, MD, Seattle, Washington; Greg Poland, MD, Rochester, Minnesota; William Schaffner, MD, Nashville, Tennessee; Ann T. Schwartz, MD,
Rockville, Maryland; Jane Siegal, MD, Dallas, Texas; Barbara Slade, MD, Atlanta, Georgia; Raymond Strikas, MD, Atlanta, Georgia; Tejpratap Tiwari,
MD, Atlanta, Georgia; Gregory Wallace, MD, Atlanta, Georgia; Patricia Whitley-Williams, MD, Washington, District of Columbia.
Advisory Committee on Immunization Practices
Membership List, October 2005
Chairman: Jon Abramson, MD, Wake Forest University School of Medicine, Winston-Salem, North Carolina.
Executive Secretary: Larry K. Pickering, MD, Senior Advisor to the Director, National Center for Immunizations and Respiratory Diseases
(proposed), CDC, Atlanta, Georgia.
Members: Jon S. Abramson, MD, Wake Forest University School of Medicine, Winston-Salem, North Carolina; Ban Mishu Allos, MD,
Vanderbilt University School of Medicine, Nashville, Tennessee; Robert Beck, Community Representative, Palmyra, Virginia; Judith Campbell, MD,
Baylor College of Medicine, Houston, Texas; Reginald Finger, MD, Focus on the Family, Colorado Springs, Colorado; Janet R. Gilsdorf, MD, University
of Michigan, Ann Arbor, Michigan; Harry Hull, MD, Minnesota Department of Health, Minneapolis, Minnesota; Tracy Lieu, MD, Harvard
Pilgrim Health Care and Harvard Medical School, Boston, Massachusetts; Edgar K. Marcuse, MD, Children's Hospital and Regional Medical Center,
Seattle, Washington; Julie Morita, MD, Chicago Department of Public Health, Chicago, Illinois; Dale Morse, MD, New York State Department of
Health, Albany, New York; Gregory A. Poland, MD, Mayo Medical School, Rochester, Minnesota; Patricia Stinchfield, Children's Hospitals and Clinics,
St. Paul, Minnesota; John J. Treanor, MD, University of Rochester School of Medicine and Dentistry, Rochester, New York; and Robin J. Womeodu,
MD, University of Tennessee Health Sciences Center, Memphis, Tennessee.
Ex-Officio Members: James E. Cheek, MD, Indian Health Service, Albuquerque, New Mexico; Wayne Hachey, DO, Department of Defense,
Falls Church, Virginia; Geoffrey S. Evans, MD, Health Resources and Services Administration, Rockville, Maryland; Bruce Gellin, MD, National
Vaccine Program Office, Washington, DC; Linda Murphy, Centers for Medicare and Medicaid Services, Baltimore, Maryland; George T. Curlin, MD,
National Institutes of Health, Bethesda, Maryland; Norman Baylor, PhD, Office of Vaccines Research Review, Rockville, Maryland; and Kristin Lee
Nichol, MD, Department of Veterans Affairs, Minneapolis, Minnesota.
Liaison Representatives: American Academy of Family Physicians, Jonathan Temte, MD, Madison, Wisconsin, and Doug Campos-Outcalt,
MD, Phoenix, Arizona; American Academy of Pediatrics, Keith Powell, MD, Akron, Ohio, and Carol Baker, MD, Houston, Texas; America's
Health Insurance Plans, Andrea Gelzer, MD, Hartford, Connecticut; American College Health Association, James C. Turner, MD, Charlottesville,
Virginia; American College of Obstetricians and Gynecologists, Stanley Gall, MD, Louisville, Kentucky; American College of Physicians, Kathleen M.
Neuzil, MD, Seattle, Washington; American Medical Association, Litjen Tan, PhD, Chicago, Illinois; American Pharmacists Association, Stephan L.
Foster, PharmD, Memphis, Tennessee; Association of Teachers of Preventive Medicine, W. Paul McKinney, MD, Louisville, Kentucky; Biotechnology
Industry Organization, Clement Lewin, PhD, Orange, Connecticut; Canadian National Advisory Committee on Immunization, Monica Naus, MD,
Vancouver, British Columbia, Canada; Healthcare Infection Control Practices Advisory Committee, Steve Gordon, MD, Cleveland, Ohio; Infectious
Diseases Society of America, Samuel L. Katz, MD, Durham, North Carolina; London Department of Health, David M. Salisbury, MD, London,
United Kingdom; National Association of County and City Health Officials, Nancy Bennett, MD, Rochester, New York; National Coalition for
Adult Immunization, David A. Neumann, PhD, Alexandria, Virginia; National Foundation for Infectious Diseases, William Schaffner, MD,
Nashville, Tennessee; National Immunization Council and Child Health Program, Mexico, Romeo Rodriguez, Mexico City, Mexico; National
Medical Association, Patricia Whitley-Williams, MD, New Brunswick, New Jersey; National Vaccine Advisory Committee, Charles Helms, MD, Iowa
City, Iowa; Pharmaceutical Research and Manufacturers of America, Damian A. Braga, MBA, Swiftwater, Pennsylvania, and Peter Paradiso,
PhD, Collegeville, Pennsylvania; and Society for Adolescent Medicine, Amy B. Middleman, MD, Houston, Texas.
Healthcare Infection Control Practices Advisory Committee
Membership List, April 2006
Chairman: Patrick J. Brennan, MD, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania.
Executive Secretary (Acting): Michael Bell, MD, CDC, Atlanta, Georgia.
Members: Vicki L. Brinsko, Vanderbilt University Medical Center, Nashville, Tennessee; E. Patchen Dellinger, MD, University of Washington
School of Medicine, Seattle, Washington; Jeffrey Engel, MD, Head, General Communicable Disease Control Branch, North Carolina State
Epidemiologist, Raleigh, North Carolina; Steven M. Gordon, MD, Cleveland Clinic Foundation, Cleveland, Ohio; Lizzie J. Harrell, PhD, Duke University
Medical Center, Durham, North Carolina; Carol O'Boyle, PhD, University of Minnesota, Minneapolis, Minnesota; David Alexander Pegues, MD,
David Geffen School of Medicine at UCLA, Los Angeles, California; Dennis M. Perrotta, PhD, University of Texas School of Public Health, Texas
Department of Health, Texas A&M University School of Rural Public Health, Smithville, Texas; Harriett M. Pitt, Director, Epidemiology, Long Beach Memorial Medical
Center, Los Angeles, California; Keith M. Ramsey, MD, Brody School of Medicine at East Carolina University, Greenville, North Carolina; Nalini Singh,
MD, George Washington University, Children's National Medical Center, Washington, District of Columbia; Philip W. Smith, MD, University of
Nebraska Medical Center, Omaha, Nebraska; Kurt Brown Stevenson, MD, Ohio State University Medical Center, Columbus, Ohio.
Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of
Health and Human Services.
References to non-CDC sites on the Internet are
provided as a service to MMWR readers and do not constitute or imply
endorsement of these organizations or their programs by CDC or the U.S.
Department of Health and Human Services. CDC is not responsible for the content
of pages found at these sites. URL addresses listed in MMWR were current as of
the date of publication.
All MMWR HTML versions of articles are electronic conversions from ASCII text
into HTML. This conversion may have resulted in character translation or format errors in the HTML version.
Users should not rely on this HTML document, but are referred to the electronic PDF version and/or
the original MMWR paper copy for the official text, figures, and tables.
An original paper copy of this issue can be obtained from the Superintendent of Documents,
U.S. Government Printing Office (GPO), Washington, DC 20402-9371; telephone: (202) 512-1800.
Contact GPO for current prices.
Date last reviewed: 12/8/2006
No: 7 Squadron
No: 7 Squadron was at Doncaster at the outbreak of the Second World War, engaged in training crews flying Hampdens to operational standard as part of 5 Group. In April 1940 the squadron lost its identity when it was absorbed into No. 16 OTU. It was re-formed at Finningley at the end of April but was disbanded three weeks later. Re-formed again in August 1940 at Leeming, it became the first squadron in Bomber Command to fly four-engined bombers, being equipped with Short Stirlings, and began operations in February 1941. In August 1942 it was one of the five squadrons selected to form the Pathfinder Force, and it converted to Lancasters in May 1943. One of the most famous raids flown by No: 7 Squadron was the attack on Peenemünde in August 1943.
Airfields No: 7 Squadron flew from.
- Doncaster, Yorkshire, Sep 1939
- Finningley, Yorkshire, 15th Sep 1939 to 23rd Sep 1939
- Upper Heyford, Oxfordshire, 23rd Sep 1939 to 8th April 1940
- Finningley, Yorkshire, 30th April 1940 to 20th May 1940
- Leeming, Yorkshire, 1st Aug 1940 to 29th Oct 1940
- Oakington, Cambridgeshire, 29th Oct 1940 to 24th Jul 1945
List of those who served with No: 7 Squadron during The Second World War
- Sgt. Reginald George Brown (d.29th Jan 1944)
- W/O N. J. Clifford, pilot
- LAC John James Copley DFM
- Sgt. William Richard John Craze
- Sgt. Leslie Ernest James Davenport, navigator
- Sgt. William Fraser, navigator (d.29th Jan 1944)
- Flt.Sgt. Henry Raymond Glover (d.25th June 1943)
- Sgt. William Edward Goodman
- F/S S. Jarvis, pilot
- Flight Sergeant Stanley Melville Liddle (d.29th Jan 1944)
- Thomas Reginald Nixon (d.20th Feb 1944)
- Sqd.Ldr. Leonard James Saltmarsh DFC and Bar
- Sgt. Ralph George Sharp, pilot
- Sgt. R. W. Wilmott (d.29th Jan 1944)
Flight Sergeant Stanley Melville Liddle 7 Squadron (d.29th Jan 1944)
My wife's brother Stanley Liddle was killed in the crash of Lancaster JA-718 in northern Germany on the 29th of January 1944. There were two survivors, the pilot W/O N. J. Clifford and F/S S. Jarvis, who became POWs in Stalag Luft 6 and Stalag 357. From letters written by Stanley before his death in that crash, we believe that these two RAF members were English. It is our hope that we can find either or both of these men so that we can learn more about that period of Stan's life.
The crew were:
- Sgt W.Fraser
- Sgt R.W.Willmott
- Sgt R.G.Brown
- F/S S.M.Liddle RCAF
- Sgt R.G.Sharp
- W/O N.J.Clifford
- F/S S.Jarvis
Sgt. Ralph George Sharp pilot 7 Sqd.
Sgt Sharp was a member of the crew of Lancaster JA-718, which crashed on the 29th of January 1944. We would love to hear from his family, as my wife's brother Stanley Liddle was one of his crewmates.
F/S S. Jarvis pilot 7 Sqd.
F/S Jarvis survived the crash of Lancaster JA-718 on the 29th of January 1944 and was held as a prisoner of war in Stalag Luft 6 and Stalag 357. We would love to hear from him or his family, as my wife's brother Stanley Liddle was one of his crewmates.
W/O N. J. Clifford pilot 7 Sqd.
W/O Clifford was the pilot of Lancaster JA-718. He survived the crash on the 29th of January 1944 and was held as a prisoner of war in Stalag Luft 6 and Stalag 357. We would love to hear from him or his family, as my wife's brother Stanley Liddle was one of his crew.
LAC John James Copley DFM 38 Squadron
My father, John James Copley DFM, was the first man from RAF Marham to be awarded the DFM in the Second World War. Last year my family and I were invited to the opening of a new barracks there, Copley Block, named after my father. I have information on his being awarded the DFM in 1940, and on the POW camps he was held in after being shot down and captured in 1941, including some information on the Long March and the Run up the Road that he was part of. A friend and I visited Denmark this year and contacted a historian who has dived on the wreck of the aircraft my father was in, and I have held some of the parts of the aircraft that have been brought back from the sea.
Born in 1912, John entered the RAF in July 1935 as ACH/Mate, later in the year gaining the rank of AC2. He was trained first as a Flight Rigger and was posted to 38 Squadron at Mildenhall on 17th July 1936, becoming an AC1 on 31st December 1936. He arrived at the newly opened Marham Aerodrome with 38 Squadron on 5th May 1937. His personal diary for 1937 documents this event and gives some details of training and night flights. He became Flight Rigger Air Gunner on 19th July 1938 and was promoted to LAC on 31st December 1938.
On the 3rd December 1939, 24 Wellington bombers from 38, 115 and 149 Squadrons attacked German warships off Heligoland, Germany. Hits were made on a cruiser and an armed trawler during the raid. During the raid, 38 Squadron Wellington captain Pilot Officer E T Odore (later Group Captain, DFC, AFC) strayed away from the main formation and was attacked by German fighters. Attacked from astern by an Me.109, LAC Copley, the rear gunner, was able to fire two bursts at point-blank range (200 yards) and saw the fighter climb sharply and stall, falling out of control out of the sky into the sea. The Wellington was liberally peppered with bullets and cannon shells, some of which penetrated the port engine tank and cylinder. Unknown to the crew, one slashed the port undercarriage. On landing back at base at RAF Marham, the aircraft ground-looped due to the punctured port wheel. The rear turret wings were hanging in strips and there was a punctured petrol tank. All crew were evacuated quickly. When LAC Copley landed he found a German machine gun bullet lodged in the quick-release box of his parachute buckle, just touching his flesh. This he saved to remind him of how lucky he had been. It is now on show in the Yorkshire Air Museum at Elvington, with his DFM and other items of interest.
The Distinguished Flying Medal citation appeared in the London Gazette of 2nd January 1940. The DFM was presented to him at RAF Feltwell on 20th March 1940. LAC J J Copley DFM is first on the Honours board at Marham today. To pay honour to their local hero, the village people of South Hiendley, Barnsley, South Yorkshire, gave him a gold inscribed pocket watch, presented by Mr A F C Assinder, New Monkton colliery manager, in Felkirk Church village hall. John had worked at New Monkton colliery before joining the RAF.
On 27th July 1940 Copley was posted to 15 OTU at Harwell, then to 214 Squadron at Stradishall, and from there to 7 Squadron at Oakington, Cambridgeshire, on 30th October 1940. He was promoted to Sergeant on 31st December 1940, and on 7th May 1941 he became a Flight Engineer.
On 29th September 1941 at 18.50, Stirling Mk.I serial number W7441, coded MG-Y (MG indicating No. 7 Squadron RAF, Y the aircraft's radio code), took off from Oakington air base, England, to bomb Stettin near the Oder river to the east of Berlin. Since the aircraft was meant to lead the attack, it was loaded with flares and fire bombs (a total of 18 SBCs) to be dropped over the target so that the other aircraft would be able to aim their bombs as fires broke out. The outward journey over the North Sea and Denmark went according to plan. When W7441 reached the east coast of Jutland it was attacked by a Messerschmitt Bf 110 night fighter. The gunners were able to avert the attack; then, a moment later, W7441 was again attacked by the Bf 110 (from 3./NJG 1, the third Staffel of the first group of Nachtjagdgeschwader 1). The attack was carried out by Lieutenant Schmitz. High from the right side, he set the Stirling's right wing ablaze. It crashed in Lillebaelt, south of Brandso, at 22.47. It was Lieutenant Schmitz's third confirmed kill.
Interrogation Report of Sergeant John J. Copley (V. G. Nielson, police constable; L. H. Rasch, police sergeant) following capture at Trappendal in Hejls:
'REPORT Tuesday 30.9.1941. After giving name, rank, number, date of birth, etc., he explained that he had been on board an aircraft, a four-engine bomber, with six other airmen, refusing to give precise departure details. They had flown across north Germany, following orders to drop bombs over Stettin. While they were on their way they were attacked by German aircraft, presumably from Heligoland or Sild. They engaged combat and the person questioned said they had shot down a German aircraft. They discovered that their aircraft was on fire. The fire spread quickly and orders were given to bale out. This person does not believe that the rest of the crew escaped.
According to Copley the aircraft exploded and crashed near to the coast. He was shown a map and pointed out a location between Anslet and Brandso (or Branso) and Funen, without venturing the precise location of the aircraft.
He had landed safely in his parachute, which he said he had left in a small forest, whereupon he headed north on foot. During the landing he had hurt his left knee, which was very painful. Approximately 500 metres away from the forest he hid his safety jacket in a hedge, after which he continued walking until later that night he came to an outbuilding, where he slept for a couple of hours in a straw stack. He then proceeded to the farm from where the police picked him up, Copley knowing he could not go on for much longer owing to his injured left leg.
A reconstruction was then conducted with him, and in the place he had previously mentioned his safety jacket was found. He then pointed out the forest where his parachute supposedly was, but since he had great difficulty walking, and the forest was inaccessible by car, he could not point out the precise location. Constable Hubsmann, Christiansfeld, promised to search for the parachute with his police dog. Furthermore, Hubsmann reported that the police at Haderslev had caught two airmen from the same aircraft, information that pleased the Englishman very much. The person in question was then taken to Dr Dolmer in Hejls, who treated his injured knee. The person was then taken to the criminal investigation office, where he was handed over to Hauptmann Knock and Hauptmann Mahler.'
'W7441 was leading the bomber force to its target at Stettin; the load consisted of incendiaries and flares. The task was to light up the target for the main force. This was just prior to the introduction of the Pathfinder Force. We left Oakington on 29th Sep about 7pm, taking a northerly route over the North Sea and Denmark to hit Stettin from the Baltic. However, while approaching we were attacked by two 110 German night fighters. The first attacked from underneath astern and damaged the port wing. The rear gunner, Fulbeck, immediately opened fire and reported he had scored hits. Then a second 110 attacked from starboard, high astern; his shells caused severe damage, setting the port wing ablaze and knocking out the intercom. Fire broke out in the fuselage and the Captain gave orders to bale out. We had been flying at about 10,000 feet, but I estimate that by the time we baled out we were flying at 2,000 feet. I only had time to open my parachute, and saw I was over the mouth of a river. The aircraft dived down and crashed into the sea just off shore. The wind carried me inland a short distance and I landed in a ploughed field. The landing hurt my back and I had difficulty walking. I wandered about, then took shelter in a farm. I found out this was the home of the Hensen family, which is about 20 miles south of Kolding. They took me into their home, gave me food and then put me in one of their famous feather beds. Later I learned where I had landed from maps shown to me. Apparently they had intended to get me out of the country to Sweden, but a search was on for the crew and shortly afterwards two plain-clothed police officers arrived and I was handed over. The Wehrmacht took me to barracks, where I was joined by Captain Cobbold, who had been captured earlier. Then a third member arrived, Copley.'
Cobbold, Donaldson and Copley were taken to the German airfield near Flensburg, where they were given dinner in the Officers' Mess. Here they met Lieutenant Schmitz, who had shot them down. Another member of the crew, Sergeant David Young Niel, navigator, landed near Hejelsminde. He remained missing until Wednesday 1st Oct, when he was arrested as he attempted to cross a bridge. He was handed over to the German Wehrmacht in Haderslev. Niel met the other three in POW camp Stalag Luft 3, Sagan, southeast of Berlin.
Three other members of the crew were never found, believed to have gone down with the Stirling aircraft W7441. We will remember them.
- 1109112 Sergeant Edward Donald V Tovey, 2nd pilot,
- 1325233 Sergeant Eric James Rogers, Air Gunner ( nose turret gunner)
- 618116 Sergeant Charles Waghorn Fulbeck Air Gunner (rear gunner)
My mum, at home with her two-year-old twins and six months pregnant, received a telegram informing her that her husband was missing, believed dead. Happily, soon after she was notified that he had been captured and was in a POW camp. She now knew he was alive, but not where, or for how long. Her third child, a boy, was born on Pearl Harbour Day, 7th December 1941. He did not see his dad until after the war; contact was made with my dad but it was very limited.
During my research I was contacted by Rob Thomas, who was researching information about his uncle, Alex Donaldson. Alex Donaldson was in 7 Squadron with my dad; they were friends, worked together and were in POW camp together for 3½ years.
Rob contacted my brother to find out if Dad was still alive, and whether we had any information about his Uncle Alex. My brother remembered Alex as being a friend of Dad's from the RAF days. Knowing I was trying to piece together Dad's war history, he gave Rob my phone number, and since then we have been in regular contact on the internet and telephone. We met in July 2005, when he and his family visited me and we had a great day swapping information and putting it together. Alex had started a project in 1974 to gather details of his account and trace surviving crew members, but sadly died two years later in his mid-50s.
Rob's interest has focused on the Stirling aircraft that crashed into the sea in Denmark. He had details left by his Uncle Alex about a man he had met at the Farnborough Air Show called Soren Flensted, whose hobby was researching RAF losses over Denmark. Rob contacted Soren, who had a lot of information about the Stirling, and a letter (dated 1970) written to him by Alex about that fateful night.
Rob went to Denmark with a friend, Andy, to trace the story. They found a campsite near the area where Sgt Donaldson had landed in his parachute. It turned out that the farm on the campsite was the first building Sgt Donaldson had come to, where he had knocked on the window. Arrangements had been made to meet the Hensen family and Asta, the daughter of Johannes Hensen, who was just 10 years old when Sgt Donaldson stayed the night in 1941. From Sgt Donaldson's written account of that night: 'there was a young daughter at this house, I later learned her name was Asta Hensen. She got maps out and showed me where I had landed. I had a limited conversation with Asta and then fell to sleep.'
Rob and Andy were given a great welcome. Asta took them to her home, where Sgt Donaldson had spent the night in a chicken shed; the shed is still there. Rob and Andy then took a trip to Germany and visited Stalag Luft III, near Berlin. Dad and Alex were held there for six months, leaving just before the Great Escape took place. Returning to Denmark, Rob and Andy were contacted by the local diving club, who had located the wreck of the Stirling aircraft and had salvaged some parts of it for them to see. Rob and Andy came back home to Derby and decided they needed to learn to dive. This they did, and in 2005 they returned to Denmark with their own diving equipment.
Rob and Andy met with Carlsten Jensen, a founder member of the Middelfart Diving Club and custodian of the Stirling wreckage. Jensen knew exactly where to dive and had even salvaged some pieces of the wreck on previous dives. Rob, Andy, Jensen and other diving colleagues sailed out to the wreck, a trip of about two hours. They headed down to the depths; the water was not too bad and visibility was good, so they could see four to five metres in front of them. Rob was ecstatic. He could not have got any closer to the story, and how pleased his uncle and my dad would have been. What greeted Rob was hardly recognisable as an aircraft, just a collection of bent and twisted metal. The wreckage was strewn across the sea bed over an area about the size of a football pitch. The aircraft was probably travelling at about 200 miles an hour when it hit the water. As custodian of the wreck, Jensen has a say over who can dive it and who can take pieces away. He allowed Rob to remove some objects because he knew about the family connection. Although the wreckage has spent more than 60 years in salt water, some of the pieces salvaged were in good condition. One of the most interesting to Rob was a tail wheel. Another unusual find was a piece of twisted plastic, which appears to be part of the cockpit window.
Rob and Andy were both mindful of the three RAF crew who had lost their lives in the aircraft, and that the wreck was effectively a war grave. They were careful not to cause too much disturbance. 'Out of the three, one of the bodies was found on the beach by a local. It is now thought to be that of C W Fulbeck, the rear gunner. However the front gunner and co-pilot never got out of the Stirling before it crashed, so their remains could be buried there.' Jensen says that the echo-sounder had picked up something buried deep in the mud, thought to be the front end of the Stirling.
Rob, on his visit to me in 2005, brought parts of the Stirling for me to see. He is keeping them in water to stop them oxidising, and intends to clean them up and seal them with a mixture of linseed oil and paraffin. Parts of the Stirling W7441 aircraft, preserved and held in Denmark, include oxygen cylinders, a machine gun, propeller blades, an escape hatch and engine cylinders.
I have been doing research into my father's WW2 history for seven years now and have lots of information. I have started a web site dedicated to my father: www.copeydfm.co.uk
Sgt. William Edward Goodman 7 Squadron
I am the daughter of William 'Bill' Goodman, who served in the RAF during the Second World War. He was returning with his crew from a mission to Emden when their plane, a Stirling, was shot down over the Friesland area, and they made their way to Ferwerd.
In my father's own words: "Our intention was to approach the Friesian Islands at about 13000 feet, but those atmospherics took our 'lift' away and we could get no more than 10000 feet. Even at that height we could be seen in silhouette from almost any direction, which was a potential hazard, and all crew members were asked to keep a very sharp look-out.
We were suddenly shaken by the impact of cannon shells striking the starboard (right) wing, which burst into flame. The shells had damaged the throttle and other controls to both starboard engines and the starboard aileron, as well as the bomb doors on that side. Because those two engines could not be controlled by the throttles, and the lack of aileron control caused the plane to fly on a long circular track, we would have come down in the middle of the North Sea; but Buck worked up a huge sweat with the exertion of holding some sort of course which would bring us over the Friesian Islands and, hopefully, the Dutch coast before the plane exploded. We made it and he gave the order to abandon.
My ‘chute opened and I was drifting more or less serenely to earth, wondering how many had managed to get out when I was startled out of my thoughts by an aircraft which seemed as if had narrowly missed me. It was twin engined, a Messerschmitt 110 night fighter as I remember. I saw the flaming comet which ‘J - Johnnie’ had become curving round on its final course and I wondered how many had managed to escape, Buck especially in view of the way he had captained us and ensured our safety for so long. Suddenly the aircraft exploded into a fireball hurtling through the sky and towards earth. I looked downwards and saw I was falling towards water near the coast. It is so difficult to estimate height yet to fall when above water, and all of a sudden I felt my feet and legs fall into the water. I realised it was just a film of water over mud, thick, foul smelling mud which came up the length of my thighs. It was so thick it was impossible to wade through it, and the only way I could get to firm land was to stiffen my body, fall forwards and literally crawl out of it. An important prerequisite of successful evasion was to hide the parachute and harness, but I was unable to pull them along behind me, so I prayed they would sink into the mud and not be found.
I expected to see signs of block-houses, barbed wire entanglements - even patrolling sentries. But there was no sign of anything, which I could hardly believe. To get over the dyke I crawled on hands and knees, all the while watching out and listening for any sign of defence. On the other side I saw a number of drainage ditches with access paths alongside them, stretching inland at right angles to the dyke. Again I could not see any defences, but still could not believe it. I continued to crawl alongside one of the ditches and heard a sound ahead of me. I dropped into the ditch alongside the path, stopping every now and then and hardly daring to breathe, until I was abreast of a sound of careful movement. Suddenly a voice, in a hoarse whisper said ‘Is that you, Bill?’ It was a huge shock, but it turned out to be John Travis and Mac who had come looking for me in the hope my study of the maps could help to establish just where we were.
Unfortunately we had come off course with first, evasion tactics, then the attempt to get onto a course for home. These, together with the curve we had taken had destroyed my awareness of the final position, and it was too dark to consult the map. What knowledge I did have was enough for me to indicate in which direction we should walk. We stuck to the ditch side paths until we saw a large black motor car on a road ahead. We all dropped swiftly into the ditch until the car was out of sight. That it was large and black made us think it must have been an official car, probably the hated Gestapo, the secret police.
Morning was now with us, and it was becoming very light and indicating a beautiful summers’ day. For the time being we kept to those paths until we came to a house. In Holland at that time the house and barn was under one roof, and livestock was brought in during the winter, which became very cold with most of the waterways being frozen. Our continued hammering on the outside door eventually brought the farmer and his wife out. They were not able to speak English while we did not know their language either, but we were able, by sign language, to let them know we were RAF men who had been shot down. They were obviously unable to help us, but gave us bread and cheese and a drink before we left.
Now we were committed to using roads and we were surprised to see a man in uniform coming towards us. We judged him to be a postal worker, even on a Sunday, so we smartened ourselves up and fell into step. As we passed we gave him the typical salute of the Nazi Party and marched on, not looking back. We came round a curve to the right and saw a small square on our right, leading to a church. It was a fair assumption that the vicar’s education included English. There were about a dozen houses in the square, but the largest and nearest to the church must have been his. He answered the door and, yes, he had some knowledge of English. We explained our predicament and asked for his assistance. He asked whether we were Catholics, but none of us were. He said he was unable to help us, but advised us to give ourselves up for our own safety.
We continued into what we came to realise was a small town. Here my judgement of our position brought home to me that this was one of the stations of a railway, and we should wait for a slow goods train, preferably during the night, and ride the rods like American hoboes to bring us towards Amsterdam. The curving road next revealed a large building; obviously the Town Hall or similar. I led our little group along one side of the building where we came across a gap in the railings, with steps going down into the basement. A youth of my own age was leaning on the railings and we marched past, giving the Nazi salute and ‘Guten morgen’. He nodded ‘Good morning.’ That road led to the railway, but on the way passed a school with what looked like the head teacher’s house (it was too fine to be the caretaker’s). We must find an English speaker here. The head answered the door, followed by his wife and two daughters, all of whom spoke good English. They discussed our position, but had no knowledge where we might find help. I was reminded of a hint we had been given by the evader. He suggested getting in with a young lady as a couple were much less likely to be stopped by German police than a single person. My mind swung to this when I saw the elder daughter who, together with her mother and younger sister were in tears that they could not help.
Our next priority was to find a place to hide. The land was flat and there were no coppices in which we could hide. We already knew it was no use trying to hide in a barn, so we lay down in a hollow that was hidden from the road. After a little while we noticed a woman at a bedroom window. She was too interested for our liking. We were not far from the railway, but that would not have been a good place to be. It was now full daylight and people could be seen. We were quite desperate by now, when it occurred the youth at the Town Hall had actually said ‘Good morning’!
We almost ran back and he was still there, grinning, as he nodded us to follow him into the basement. He was the Mayor’s son and his father was just about the last still in post who was an Allies sympathiser, the others having been deposed by the Nazis and imprisoned. His father, Mr Esselink and the Chief of Police had gone to view our crashed aircraft, but should soon be back. The son brewed up [some tea] for us when we saw a large black car pull up outside the Town Hall. We thought it was the car seen earlier and the two men who alighted from it were Gestapo. The son introduced us and we were welcomed most warmly. Chief Smidt soon set about making known contacts, and the intention was to pass us on to another sympathiser. He made several sorties into town, coming back with suitable clothes and rations. We began to kit ourselves out for the journey, always bearing in mind the need to keep some of our uniforms so we should not be classed as spies if caught, and executed.
Chief Smidt arrived back from one of his sorties with a white face and terribly worried. He had been tipped off that one of the pro-Nazi persons in the town had told the Germans we were in the Town Hall and they were on the way to arrest us. The situation was fraught with danger for the good people of Ferwerd, where we were, so he had no option but to detain us. I suggested we assault him, take his revolver and run away. He said he could not allow that, as there might be reprisals against his town. We agreed that was likely, but the war would not go on much longer, thinking of the Thousand Bomber raids, so hurried up to conceal any help Smidt had tried to give us, and leaving them with all the currency from our escape kits. The German Army lorry pulled up outside the Town Hall and were led down into the basement by a huge officer holding what I have described as the largest hand held howitzer ever seen. Smidt, who had pulled out his revolver when he saw the Germans arrive put it back and ‘handed over his prisoners.’ He was able to say he had interrogated us and supplied him with our names, ranks and numbers. I think he was a good policeman to have rounded up the ‘arrest’ as he had done at no risk to the local populace.
Years later I learnt that he had remained as Chief throughout the war and was a respected man who tipped off the Resistance and stopped the Germans from finding out too much. Mr. Esselink was imprisoned during the war as a sympathiser, and resumed as Mayor after the war. His son was executed by the Germans after he had been arrested actually taking evaders ‘down the line’ and home to fly another day.
Thus ended our few hours of freedom before we ended up as Prisoners of War."
My father returned to the area in the late 1990s and contacted the family of the people who helped them... they also returned his flying helmet which they'd kept for all those years - which was amazing.
Sgt. Leslie Ernest James Davenport nav. 7 Squadron
I have done a lot of research on my grandad, Leslie Ernest James Davenport. He volunteered for the RAF and was posted to 7 Squadron. At the start of the war he was in training and flew with a Wellington squadron in Lincolnshire, but I have been unable to find any information on this.
I am aware he was with 7 Squadron at Oakington in Cambridgeshire. I am aware of 13 of his missions; then, on 7th September 1941, he was shot down over Recklinghausen, Germany, after a bombing raid on Berlin.
His regular crew were:
- F/O D.T Witt - passed away in 1963
- Sgt. L D A Bolton
- P/O D.K Deyell
- Sgt. A.E Burrows - KIA
- P/O J.L.A Mills - KIA
- Sgt J.T Prentice - Living in NZ
Other people I am aware he would have known were: A.H Piper (who passed away three years ago), D.H Williams, F.C Williams, K.O Blunden, E.S Baker, R. Blacklaw, Sgt. Hale, K. Huntley and J.T Copley.
The crew he was shot down with on the 7th September 1941, were all POWs.
- F/sgt Alick Yardley - Serv.No 748748 - taken to Stalag Luft 6
- F/O C.M Hall (RAAF) 402002 - still alive
- Sgt. J.H Boulton 742790
- Sgt J. Sutton 746720
- Sgt A. Speakman 551472
- Sgt D. Owens 528924
I have been to the records office at Kew and found out all about the raids he was on. I know he was based at Oakington with 7 Squadron. He saw active service on 13 missions that I am aware of, between June 1941 and September 1941. I am also aware of the POW camps he was in: Heydekrug, Sagan, Lamsdorf, Thorn and Fallingbostel.
I am interested to know whether anyone has photos of 7 Squadron (aircrew and planes), or knows if any of the above airmen are still alive and how I could trace them or their families if they have passed.
My grandad was a navigator and mainly a front gunner. He went on raids to targets such as Cologne, Berlin, Dusseldorf, Bethune (France), La Pallice (France), the Borkum seaplane base, Hannover, Kiel, Duisburg and Huls, to name some.
I am aware of the many books, some of which are in my possession, but I would love to know whether any of these veterans are still about. Any information will be gratefully received. I would love to meet or speak with people who may have known my grandad or served in the same squadron. My grandad sadly passed away in 1988, so I was unable to speak to him about the war. I do have family photos of my grandad and his colleagues that may be of interest to others. I look forward to any info that comes up.
Sgt William Richard John Craze 7 Squadron
My grandfather, William Craze, was in 7 Squadron, and their Lancaster bomber JA685 was shot down on a mission to Leipzig on 4th Dec 1943. He was captured and sent to Stalag IVb and eventually released. His POW number was 267155. I would love to hear from anyone else who might have known him.
Flt.Sgt. Henry Raymond Glover 7 Squadron (d.25th June 1943)
My brother, Henry Glover is mentioned in the "Memoirs of Group Captain T.G. Mahaddie: The story of a Pathfinder." The plane he was in was shot down over Holland and he is buried in Castricum Protestant Churchyard Noord, Netherlands. Plot J Coll.grave 6. His squadron flew Stirlings, from Oakington, Cambridgeshire.
Sqd.Ldr. Leonard James Saltmarsh DFC and bar. 7 Squadron
Leonard Saltmarsh served before and after the war in the Surrey Constabulary and I am working on the history of that force. In December 1942 he trained in a Tiger Moth and went on to fly Wellingtons and Lancasters with 7 Squadron, Pathfinders. He was awarded the DFC for actions on the 26th of August 1944 in a raid over Kiel. He flew 99 Operational sorties.
D.F.C. London Gazette 3 October 1944. The original recommendation states:
‘Flying Officer L. J. Saltmarsh has so far completed 17 successful sorties as Pilot and Captain of Lancaster aircraft, and has been most conspicuous at all times for his extremely high standard of courage and resoluteness. On two difficult occasions during daylight attacks on Vaires on 12 July 1944 and on Emieville on 18 August 1944, he observed a crippled bomber proceeding at a very reduced speed away from the target. On both occasions he dropped behind the main bomber stream in order to escort the damaged bomber safely back to England. On 15 August, during a daylight attack on the airfield at St. Trond, one of his engines became unserviceable on the way to the target and the propeller had to be feathered. But in spite of the fact that he was getting behind the main stream, owing to his reduced speed, he pressed on and bombed the target, and secured an aiming point photograph. On the way back from the target another engine became unserviceable but did not deter Flying Officer Saltmarsh from proceeding to and bombing an alternative airfield target with a bomb that had failed to be released over the primary target, and once more he secured an aiming point photograph. He eventually arrived safely over base and made a perfect two-engined landing. It was not until after he had landed that he reported the fact that two engines had become unserviceable during the sortie. This very gallant pilot is strongly recommended for the award of the Distinguished Flying Cross.’
Bar to D.F.C. London Gazette 16 November 1945. The original recommendation states:
‘This officer has completed 53 operational sorties, of which 28 have been carried out in the squadron, in the Path Finder Force, 18 of them as Captain of a Marker Crew. Flight Lieutenant Saltmarsh is an efficient and skilful pilot who has always shown a strong devotion to duty and a cheerful confidence which has always inspired a high standard of morale in his crew. He has always displayed exceptional fearlessness in the face of danger, complete disregard for personal safety and has pressed home his attacks against the enemy with the utmost determination.’
Leonard James Saltmarsh commenced pilot training at No. 31 E.F.T.S. at De Winton, Alberta in December 1942, and graduated from No. 34 E.F.S. at Medicine Hat in June 1943. Back in the U.K., he attended No. 11 A.F.U. at Shawbury, prior to joining No. 26 O.T.U. at Little Harwood in early January 1944, where he gained experience on Wellingtons, and then attended a conversion unit for Lancasters at Waterbeach, at which place he joined No. 514 Squadron that June.
Thus ensued his first tour of operations, commencing with a strike against L’Hey on the 23 June and ending with another against Emmerich on 7 October, the intervening period witnessing him attack numerous French targets in support of the Allied invasion, but also a number of heavily defended German targets, including Bremen, Dortmund, Saarbrucken, Stettin and Stuttgart. And as confirmed by the recommendation for his D.F.C. after 17 sorties, several of these trips were not without incident, his flying log book further stating that his Lancaster received flak damage during strikes against enemy panzers and transport at Villiers Bocage on 30 June and against a supply depot at Beauvoir on 2 July. Similarly, too, during a visit to Bremen on the night of 18-19 August.
In October 1944, Saltmarsh attended the Path Finder Force’s training centre at Warboys, as a result of which he was transferred to No. 7 (P.F.F.) Squadron at Oakington in the following month, flying his first such sortie on the night of the 11th-12th, against Dortmund. A daylight strike against enemy communications at Julich, in support of General Patton’s troops, followed on the 14th and a night operation to Sterkrade on the 21st, Saltmarsh’s flying log book again noting flak damage. Then on the 29th he flew as support aircraft to the Master Bomber on a raid to Dortmund, a role that he would fulfil with growing regularity over the coming months. Such heavily defended targets as Duisburg, Essen (twice) and Karlsruhe formed the backbone of his operational agenda in December, while January 1945 saw him attacking, among other locations, Hanover, Magdeburg, Munich and Stuttgart, his flying log book noting an encounter with a Ju. 88 on the Munich run. February witnessed his Lancaster carrying out strikes against Dortmund, Gelsenkirchen, Ludwigshaven and Pforzheim, in addition to participating in the famous “firestorm” raid on Dresden on the 13th, an action that Saltmarsh would robustly defend in years to come.
March saw him completing five more sorties to German targets, three of them in daylight, and April another four, two of these in daylight, including Bremen on the 21st, which latter operation marked the end of his operational tour. He did, however, fly three “Cook’s Tours” to the Ruhr in May, and ended his career with an appointment in Transport Command in December 1945. Over and above all of this, however, it would appear that he flew 56 “unspecific” sorties of a secret nature, evidence for which is to be found in an endorsement from “Bomber” Harris. He also flew diversions, experimental flights with special equipment (including radar), photographic reconnaissance, these top-secret sorties and others. In May 1945 he was selected, and volunteered, to form a new squadron for the continuation of hostilities against Japan.
Any information on Mr Saltmarsh DFC and Bar would be appreciated
Thomas Reginald Nixon 7 Sqdn (d.20th Feb 1944)
Thomas Reginald Nixon, was killed on 20th February 1944. We wonder if he is our cousin, Reg who was from Smallthorne in Stoke on Trent? Can anyone help?
Can you help us to add to our records?
The names and stories on this website have been submitted by their relatives and friends. If your relations are not listed please add their names so that others can read about them
Did you or your relatives live through the Second World War? Do you have any photos, newspaper clippings, postcards or letters from that period? Have you researched the names on your local or war memorial? Were you or your relative evacuated? Did an air raid affect your area?
If so please let us know.
Help us to build a database of information on those who served both at home and abroad so that future generations may learn of their sacrifice.
Celebrate your own Family History
Celebrate by honouring members of your family who served in the Second World War, both in the forces and at home. We love to hear about the soldiers, but also remember the many who served in support roles: nurses, doctors, the Land Army, munitions workers etc.
Please use our Family History resources to find out more about your relatives. Then please send in a short article, with a photo if possible, so that they can be remembered on these pages.
Website and ALL Material © Copyright MIM to MMXI
- All Rights Reserved
Maya History on Ambergris Caye, Belize
2 - Boca Ciega site. Probably the same age as Marco Gonzalez or slightly younger. Much of the site is buried beneath swamp and more modern refuse. No structures.
3 - At a small turnoff along the beach road by the sewage treatment plant. Chert tools and pottery sherds abundant, and some opened conch shells. No structures.
4, 7, and 9 - Un-named -- Bedrock highs on which Maya artifacts are present, including pottery sherds and some chert tools. No structures. Number 9 surrounded by dense jungle.
5 - San Pablo site. Along the road as you enter San Pablo Town from the beach road. A newly-bulldozed area of "black dirt" (anthrosol) with abundant pottery sherds, chert tools, and opened conch shells. Foundations of some structures (limestone slabs) visible.
6 - Un-named. On the road directly south of Sweet Basil restaurant. An area of "black dirt" (anthrosol) with pottery sherds and chert tools.
8 - Un-named -- In shallow water (1 ft) at the point of land. Pottery sherds and chert tools.
10 - Un-named -- On the eastern and western shores of the northern part of Blackadore Caye. Pottery sherds present locally.
11 - Santa Cruz Site. A short walk into the jungle. Limestone slab foundations, pottery sherds, and chert tools. Where the Maya used to burn limestone to make plaster.
12 - San Juan Site. On the northern tip of the San Juan Peninsula, the current location of the Bacalar Chico Marine Reserve research facility. Abundant pottery and chert.
13 - Chac Balam Site. The subject of the fictional action-suspense novel being written by S. J. Mazzullo.
The rulers of the Maya civilization were much like kings. Each city-state was controlled by an individual who may have inherited his office because of his ancestry. Today we know that much of the writings and art of the Maya on their monuments and murals are records of the history of the ruling lineages. Monuments, or stelae, record much of this history. The importance of a ruler's lineage is reflected in the fact that his relationships to past rulers and their accomplishments are described in such detail on these monuments.
Maya astronomers were more astrologers than scientists. Our own astronomers ruminate about ideas such as the origins of the universe and how the various celestial bodies are related to each other. Maya astronomers, on the other hand, were much more interested in the effects events in the night sky had upon their lives and especially the lives of their rulers. The great achievements of the Maya were made by detailed observations of generations of skywatchers without the advantages of modern technology. They observed the sky for thousands of years and with such precision that their measurements of celestial events, such as the synodic period of Venus and the Maya calendar, were more precise than ours. The application of astronomic principles can be seen in some of their great architecture. At Chichen Itza, they built a great observatory and designed the great pyramid, now called El Castillo, with 365 steps, one for each day of the year. El Castillo was designed with an orientation marking the vernal equinox as well. In Belize, Plaza B at Chan Chich and the main plaza at the site of Caracol may have been laid out to celebrate the summer and winter solstices.
For most people, the most startling and grand evidence we have today of the Maya civilization are their 'pyramids'. Unlike the Egyptian pyramids which were primarily tombs for the rulers, Maya pyramids were really simply huge substructural platforms. On the summit was usually a set of small rooms used by the religious elite. Occasionally, a ruler or another elite member of society was entombed in one of these buildings. However, their primary purpose was not for burial of the dead but for use by the living.
Archaeologists once thought of the Maya as having 'Old' and 'New' Empires, with the 'Old Empire' based at the great center of Tikal and the 'New Empire' at Chichen Itza. Today, we see the Maya civilization not as an empire controlled from a single city, but as a series of city-states. Despite decades of research, we are still unsure of the relationships among these large centers. Certainly they cooperated and competed with each other at different times. In the year 562 A.D., Lord Water, the ruler of Caracol, perhaps the largest ruin in Belize, conquered Tikal. For about 140 years, Tikal was dominated by Caracol. On the other hand, in 450 A.D. Stormy Sky, one of the greatest rulers of Tikal, may have installed his son, Six Sky, as the ruler of the city of Rio Azul. While it is clear that each city-state controlled the surrounding region, we do not yet understand the nature of that control. Were smaller ceremonial centers vassals of the kings or were they largely independent? Though more research answers old questions, it often raises new ones which cannot be solved for years. The relationship between large and small Maya sites is one of those questions.
It is generally believed that Ambergris Caye was part of the city-state of Chetumal. Though the name of the Mexican port on Chetumal Bay was changed from Payo Obispo to Chetumal, ancient Chetumal was almost certainly at the site of Santa Rita on the outskirts of Corozal Town. Excavations at Santa Rita have shown that the kingdom thrived into the 13th century, well after the general collapse of the Maya civilization. Here, again, the relationships among ancient communities become quite foggy. Were the smaller communities on Ambergris Caye closely integrated with Santa Rita or under Santa Rita's protection or were they independent from the mainland kingship?
Whatever the nature of the Maya political organization, trade and commerce are parts of the glue that holds all societies together. One of the major functions of all governments is to regulate the flow of goods among trading partners to the benefit of the state. Some archaeologists have thought that the origins of civilization itself may be perceived in the need for institutionalized trade and commerce. Once trade reaches the point where it is no longer simply casual exchange, institutions, sometimes private and sometimes public, must be established to cope with the flow of goods and resources. Certainly, the Maya were no exception to this.
It only takes a few minutes with a map of the Maya lowlands to understand the importance of coastal and riverine canoe trade. The rivers furthest north in Central America drain the Peten region of Guatemala into the Gulf of Mexico and the Caribbean Sea. On the Caribbean side, the Rio Hondo, the New River and their tributaries empty into Chetumal and Corozal Bays just behind Ambergris Caye. Ambergris acts as a barrier island to protect these bays.
The island also is a logical place for goods from the mainland to be brought and placed on board coastal trade canoes. In 1502, Columbus had encountered one of these canoes near the Bay Islands of Honduras. It was "an Indian canoe, as long as a galley and eight foot in breadth, laden with western commodities, which it likely belonged to the province of Yucatan" and had 25 people on board. In 1988 a French-Mexican team replicated the trip of such a canoe. In a single day, they canoed from the village of Xcalac, six miles north of Ambergris Caye, through the Bacalar Chico channel which separates Ambergris from the state of Quintana Roo, Mexico, and then south to the San Pedro lagoon. The canoe covered nearly 30 miles in a single day's travel. The 16-foot replica canoe could hold about a ton of goods and was powered by six paddlers. Boats of the size met by Columbus could have carried four to five times as much cargo.
The canoe team's success with a square-rigged sail is interesting as well. Archaeologists have debated whether the Maya used sails on their canoes. While some murals appear to show sails, not all authorities agree. Having found that attaching a sail was so easy and useful, the French-Mexican team could not believe that the creators of huge cities, astronomy and calendrics would not have done the same.
Ambergris Caye's strategic location allowed main land goods to be trans-shipped to the coastal trade routes and vice versa. Mainland northern Belize was known to have produced important agricultural products such as corn and cacao. Cacao was used by the Maya as a medium of exchange, a sort of money. It was consumed as both a beverage and mole, a sauce used for ritual occasions. Indeed today's chocolate is little more than cacao and sugar.
In exchange, the mainland imported elite goods such as pottery, jade, obsidian and grinding stones made of volcanic basalt. Such goods were imports which aggrandized the elites. When such goods were not available in sufficient quantities, local counterparts or imitations were occasionally used. In North American society, Gucci bags and European luxury cars function in much the same manner. The Maya preferred basalt grinding stones over the local limestone as they did not fragment so easily. Indeed, it is simple to determine whether an ancient Maya ate corn ground by limestone or basalt. The limestone ground corn wore the people's teeth so badly that often just looking at the results is a painful experience. Other items, such as exotic pottery and jade, did not function better than their local counterparts. Instead, their owners were using them to mark their status. Not everyone today can afford diamond jewelry and, by the same token, not everyone in the past could afford jade ritual objects.
Other materials were also important as commodities rather than elite goods. For example, people who eat enough meat do not normally need to add salt to their diet. However, with the high populations of the mainland, it could not have been possible to supply adequate meat to everyone. Therefore, much salt was imported from the large salinas of northern Yucatan. Salt, then, was a commodity needed by everyone. Anyone who has experienced a salt deficiency in the tropics understands this very clearly. On northern Ambergris Caye, too, the lagoons provided salt until only a few decades ago. Although there was not enough to provide for the needs of the mainlanders, sea salt added an important facet to the economy of ancient Ambergris Caye.
Documents found recently in Seville indicate that during the 16th century, at least some Spaniards recognized the economic importance of the Ambergris salt sources. In 1565, the governor of Yucatan was petitioned for a concession to commercially harvest salt from lagoons near the Bay of Chetumal. It is not clear that the document refers to Ambergris Caye, but it is very likely. Nor do we know if the petitioner was successful.
Archaeologists deal with the chronology of the Maya civilization by dividing it into the Preclassic, Classic and Postclassic periods. These are further divided into smaller periods such as the 'Early Classic'. During the Preclassic, we see the development of the institutions, architecture and kingdoms which are more apparent later in the Classic period (300-900 A.D.). Across Corozal Bay from Ambergris Caye is the site of Cerros, near the mouth of the New River. During the Late Preclassic period (300 B.C.-300 A.D.) at Cerros, archaeologists have found evidence of large-scale construction of public architecture. By about 300 B.C., the Maya transformed Cerros from a small coastal village into a large center for commerce and regional authority. In a single enormous phase of construction, they built a huge, 10-foot-tall platform and, on top of that, a series of pyramids. The largest of these rises 90 feet above the top of the platform. By this time, Cerros had become a city encircled by a canal that drained water from the low-lying community and brought water to the fields surrounding the city. Cerros seems to have specialized in the control of trade in elite goods.

Unfortunately, it is nearly impossible to recover direct evidence of trade in salt, cacao or corn because these goods do not preserve as well as jade or basalt. To complicate the situation further, ancient people had a tendency to eat the evidence.
At Cerros, Structure 5 is a small building in the core zone of the site with stucco images of the city's rulers on its facade. Such facades have now been found at other sites including Lamanai and, perhaps, La Milpa in Belize, Kinal and Uaxactun in Guatemala and Kohunlich in Mexico, just north of the Belize border. That Cerros participated in this region-wide celebration of the leaders of cities during the Late Preclassic indicates that the site did, indeed, hold an important place in the Maya world. While such cities dominated the Late Preclassic landscape, many much smaller communities also existed, ranging from small rural residences to more elite groups with formal courtyard residences. How these smaller communities were integrated with the great towns and cities we do not yet know. Surely, though, the kingdoms of the Classic period had their roots well established in the Late Preclassic.
It has been found that by the Early Classic period (300 A.D.-600 A.D.), the architectural, political, commercial and social patterns of the fluorescence of Maya civilization had become institutionalized in Maya cities throughout Belize and northern Guatemala. This period usually is considered to begin in 292 A.D. when the Maya first erected a carved monument, or stela, at Tikal. Stelae were used to announce and mark major events and the sometimes exaggerated accomplishments of the rulers.
However, it was during the Late Classic period (600-900 A.D.) that the architectural grandeur reached its height at Belizean cities such as Altun Ha, Caracol, La Milpa, Lamanai, Lubaantun, and Xunantunich. It was also during the Late Classic that the Maya expanded their political hierarchies. Rulers of even relatively small Late Classic sites in the northwestern interior of Belize began to erect stelae. Populations had grown to such an extent that virtually none of the tropical rainforest could have remained. Erosion from hill slopes had begun to fill the canals that drained the agricultural fields of the bajos, or low-lying areas. Aside from the problems of coping with the needs of large populations, the Maya experienced intrusions by neighboring groups. While the elite rulers created a market for exotic goods, and commodities such as salt became more and more necessary, they may have also created a separate merchant class over which they did not have authority. The picture of the Maya near the end of their civilization is one of a society under stress from many directions.

The Classic Maya civilization 'collapsed' about 900 years after Christ. In the heartland of the southern lowlands, major construction ended, monuments were no longer built, populations declined and dispersed. However, not every city participated in the collapse. Some, like Lamanai, saw continued construction of large buildings for another century. Others, like Santa Rita at the site of today's Corozal Town, saw the continuation of ruling lineages for several centuries more. In general, though, cities were abandoned, ending the political and commercial success of the Classic period. Those that survived may have done so because their strategic locations enabled them to continue participating in the re-ordering of society. Ambergris Caye also played a very important role during this Terminal Classic period.
The final period, known as the Postclassic, dates from about 1000 A.D. until the arrival of the Spanish. There were several attempts at revitalizing the Maya civilization during this time, particularly at Chichen Itza, Mayapan and Tiho, now Merida, in Yucatan, Mexico. The Postclassic Maya, possibly under the commercial control of the Putun Maya from Veracruz who operated from Chichen Itza, built a series of coastal ports, such as Isla Cerritos on the north coast of Yucatan, Tulum on the Caribbean coast and Ixpaatun on the north side of Chetumal Bay. Tulum and Ixpaatun were built on bluffs and defended against land assault by walls around their core zones. The island of Cozumel functioned as both port and home of shrines for the goddess Ixchel to which the Maya made pilgrimages.
1. San Juan
2. Chac Balam
3. Ek Luum
4. Burning Water
7. Robles Point
8. Basil Jones
9. Los Renegados
11. Tres Cocos
13. San Pedro
14. Marco Gonzalez
15. San Pedro Lagoon
16. Laguna Frances
17. Santa Cruz
18. Punta Limon
Since the mid-1980s, two groups have been investigating the archaeology of Ambergris Caye. Elizabeth Graham and David Pendergast of the Royal Ontario Museum have worked at the site of Marco Gonzalez on the southern end of the island. The Ambergris Caye Archaeological Project, directed by Tom Guderjan of the University of Texas Institute of Texan Cultures at San Antonio, James Garber and David Glassman of Southwest Texas State University, and Herman Smith of the Corpus Christi Museum, spent four years finding and excavating ruins on the northern end. Together, we have been able to learn much of the Maya past at Ambergris.
When the Ambergris Caye Project started, our interest was in learning about the maritime trade of the Maya which must have occurred on Ambergris. We knew that coastal canoe trade had been a primary means of moving goods from one place to another in the Maya world. During the Postclassic period, it was thought that this trade had reached its zenith, but we knew relatively little about the mechanics of maritime trade before that time.
Ambergris Caye presented both opportunities and impediments to this sort of commerce. While Maya canoes could easily ply the waters between the coast and the reef, they would need to travel outside of the reef at Rocky Point on Ambergris Caye. At Rocky Point, the reef converges with the island, creating a beautiful and dramatic place but also one that is treacherous for boats even today. The Maya solved this problem by digging a channel across the peninsula at its narrowest point north of Rocky Point. Actually, Ambergris Caye is not an island but a part of Mexico's Xcalac Peninsula, separated from the rest of the land and the country of Mexico by an ancient canal. The Bacalar Chico canal was dug by 600 A.D., if not before. It was easier to dig a one mile long canal than it was to risk the loss of trade goods on the reef. Moreover, if the Maya could dig the much larger canal at Cerros by 300 B.C., certainly they could have dug the Bacalar Chico 900 years later.
The channel which separated Ambergris from Mexico, then, was a 'funnel' for trade canoes. These canoes carried everything from salt and food to jade, obsidian, and pottery along the Caribbean coast. Columbus even encountered one which may have been 50 feet long and carried 25 people. Since Ambergris protects Chetumal and Corozal Bays from the open sea, the cut on the northern end, the Bacalar Chico, must have been an important access for the maritime traders. We also knew that the small site of San Juan was on the back side of the island and would be the first place a canoe would encounter after passing through the Bacalar Chico canal.
Based upon the excavations and surveys of Ambergris since 1985, we can now reconstruct the archaeology of the island to a large degree. The first known occupation of the island was at Marco Gonzalez where Graham and Pendergast have found deeply buried pottery fragments from the Late Preclassic period. Unfortunately, we know little of these early people except that they were there. Perhaps, these early artifacts are the remains of a fishing station or outpost from a larger mainland community such as Cerros.
Much the same is true of the Early Classic period which yields evidence of occupation at other sites, Laguna Francos and Yalamha, on the island. Laguna Francos is a large site by island standards and dates mostly to the Late Classic. However, some Early Classic pottery has been found in the backdirt of areas disturbed by looters.
Yalamha, literally 'under the water', is entirely submerged in about two feet of water near the entrance to Laguna Francos. Only a scatter of Early Classic pottery and stone artifacts, covering about 300 square meters, can now be found. This small residence may be a clue to why we cannot readily find Early Classic materials on Ambergris. We already know that relative sea level has risen about 50 centimeters since 100 A.D. and one meter since 1 A.D.. Not only have world-wide sea levels risen, but the geologic plate of which Ambergris and northern Belize are parts is tilting downward. Residents of the town of Corozal complain about the loss of their front yards to the bay during their lifetimes and the site of Cerros was once on the New River. With the expansion of the bay, Cerros is now several miles from the river's mouth. Likewise, Yalamha has sunk. As it did, wave action eroded away all of the soil and left the potsherds and stone tools to gradually drop onto the bedrock below the place where they had been used 1,300 or more years ago. If Yalamha, a very small residence on the edge of the coast, is representative of the Early Classic pattern, then it is unlikely that many other sites from this period will be found. In some cases such as at Laguna Francos, larger buildings were placed on top of the Early Classic remains and protected them from erosion.
By the Late Classic period, the coastal margins of the island had become heavily settled. San Juan, Chac Balam, Ek Luum, Punta Limon, Santa Cruz, Laguna Francos, San Pedro Lagoon, Tres Cocos, Habaneros, Burning Water, the Hancock site and Marco Gonzalez were occupied. It is during the Late Classic period that we see evidence of the greatest coastal population and the greatest diversity of sites.
A number of sites on the west side of Ambergris, Laguna Francos, San Juan and Chac Balam among them, share a number of features. Each of them has a formal arrangement of small mounds which supported pole and thatch buildings, and small plazas between the mounds. Each of them also has either a good natural harbor or an artificial harbor. We have also found relatively large amounts of exotic pottery, obsidian, basalt and ornamental greenstone. All of these things reflect the long distance trade which created wealth on the island. Like traders everywhere, from antique dealers to drug dealers, these ancient traders skimmed some of the cream off the top. So, the communities involved with trade were able to enjoy the 'Waterford crystal' of the day, so to speak. They also emulated the architecture of the mainland. This was both a reflection of their affluence and their need for formal and appropriate space in which to conduct business. The small plazas among their buildings were, in effect, offices and markets, where they could undertake their business. And, of course, coastal trade could not be serviced without a place to harbor boats. Indeed, most of these sites are located where natural harbors exist, although at Chac Balam the Maya had to dig a harbor over 100 meters long so that boats could be secured.
San Juan juts into the water of the back side of Ambergris, just where boats could easily see it as they completed the trip through the Bacalar Chico canal. At San Juan, we found pottery from the Yucatan, the south coast of Belize, as well as the Peten region of Guatemala and Campeche. We also found several vessels of Tohil Plumbate pottery which was made only on the Pacific Coast of Guatemala. There was gray obsidian from highland Guatemala and a surprising amount of green obsidian from Pachuca in central Mexico. Green obsidian, like all green stone, was very much valued by the Maya and was traded by sea around the Yucatan. It was said that "...because its appearance is like a green Quetzal feather, it is precious, esteemed, valuable; it is worthy of being cherished; it merits storing; it is desirable, worthy of envy; it merits esteem and is praiseworthy."
In large ruins of northern Belize, about one percent of the obsidian is from Pachuca. At San Juan, well over 15 percent was from Pachuca. The people of San Juan also had access to some jade objects and basalt grinding tools from the Guatemalan highlands and had ground stone artifacts from Belize's Maya Mountains.
Much of this material was found in garbage deposits; other pieces were found in burials. The Maya tended to bury people under their homes, or if they were rulers, perhaps in a tomb inside a temple. One burial from San Juan was of an infant child about three years old. The child was buried with a Late Classic cylinder vase and the pedestal base of a vessel dated several hundred years later. The cylinder vase had probably been a family heirloom. A male about 40 years old was interred with a Tohil Plumbate jar (the San Juan jar) and two other ceramic vessels, a carved jade face, a tubular jade bead, two stone knives and a stone flake, a carved shell bead, four deer antlers, an awl made from a deer bone, and a manatee bone. This man was clearly important and may have been an expert stone worker; the deer antlers were exactly what would be used to fabricate stone tools.
One burial was not identified until we returned to our laboratories. We had excavated a burial with an offering inside two large dishes, one placed upside down on the other so that they were resting lip to lip. Inside, we found what we first believed to be fish bones. It was only later that we found that this was the remains of a newborn or stillborn child. The reverence which the Maya showed for such a young life is indicative of their compassion.
Especially exciting among the artifacts at San Juan was the very high percentage of green obsidian recovered. Green obsidian comes only from the Pachuca area of central Mexico and would need to be traded all the way around the Yucatan Peninsula to reach San Juan. Often, a few pieces of green obsidian are recovered in important tombs at mainland sites. However, we found that almost 14 percent of the obsidian from San Juan was green Pachuca material. X-ray fluorescence of the gray obsidians also revealed that other pieces came from Michoacan and another still unknown Mexican source. Given San Juan's setting at the west side of the Bacalar Chico Canal, it is entirely possible that the green obsidian bound for northern Belize first stopped at San Juan.
The architecture of San Juan is also indicative of its role in the trade network. While no formal plaza exists at San Juan, buildings with mixed residential and administrative functions were found. Structure 3 is a building with a two-tiered round substructure and a small staircase leading to a round pole and thatch building on top. The entire arrangement sat on top of a rectangular platform. Round buildings like Structure 3 at San Juan are associated with influence and trade from northern Yucatan.
Another building, Structure 4, was very small and may have been used for storage. Its prominent location, adjacent to Structure 3, may indicate that it was used for the storage of valuable trade goods. Along one of its walls, we found a pile of notched potsherds. These are believed to have been used for weights on fishing nets and are still in use by contemporary fishermen. Perhaps someone had hung their net on the side of the building. Then the net fell to the ground, not to be disturbed again for a thousand years.
The latest date we have from San Juan is about 1,000 A.D., obtained from under a small house floor on the flanks of the main platform. It seems that after the main sector had been built, small residences were constructed along the water's edge. These Terminal Classic people may have continued to use San Juan as a trade point, but we cannot be sure. Centuries later San Juan was visited by both Spanish and British sailors, as evidenced by the historic bottles and coins found there.
Chac Balam was another important Maya community of the Late and Terminal Classic periods. The site is located between San Juan and the Bacalar Chico canal, with a man made harbor dug to it. The site itself is rather small, covering an area of about 150 meters by 50 meters. The central portion of Chac Balam is a formal plaza about 25 meters square with buildings arranged around it on platforms which vary in height from three to six meters tall. Such platforms on Ambergris are not usually built of stone, but of marl, a clay that results from limestone crumbling into small clay particles. At Chac Balam, unlike most other sites, the marl platforms were faced with cut limestone to produce a facade similar to mainland architecture. On top of these platforms would have been pole and thatch houses. Chac Balam yielded an artifact inventory similar to San Juan's. Polychrome pottery from the mainland and trade goods from the north and south were recovered, as well as artifacts of obsidian, basalt, slate and greenstones such as jade. To bury one important adult male, a new addition was built on one of the platforms on top of the body. The man was interred with a set of jade earspools, a bone bloodletting tool and receptacle, and a fluted polychrome vessel probably made at Altun Ha. Underneath the burial was a cache that included a finely made black plate and two trickle-ware plates imported from the north. The bloodletting tool and blood receptacle were especially interesting as bloodletting was a ritual of the Maya elite. The evidence of such a ritual associated with this burial indicates that this man may have been the ruler of Chac Balam.
After the main occupation of Chac Balam, perhaps even after the site had been abandoned, a large number of very shallow burials were placed at the site. It is possible that Chac Balam was the ancestral home of people who at that time lived elsewhere and who returned to inter their dead.
Another kind of artifact found at Chac Balam and many other coastal sites, including Ek Luum, San Juan and Marco Gonzalez, is a pottery type named Coconut Walk Unslipped. Coconut Walk apparently was only made as very thin, shallow dishes about 40-50 cms in diameter. According to Elizabeth Graham and David Pendergast, these were used for making salt by evaporating sea water. At Chac Balam, two plaster altars were excavated which were covered with Coconut Walk sherds. Also, Mound 1 at Ek Luum was a ritual location or temple of some sort. Enormous numbers of Coconut Walk sherds were also found there. With Coconut Walk found in such ritual contexts at Chac Balam and Ek Luum, we see reason to think that an alternative use for this pottery must exist. Finally, at least one lagoon on northern Ambergris, near San Juan and Chac Balam, produced salt well into this century. Since then, rising sea levels have made that lagoon no longer exploitable. It seems unlikely that, with a large local salt source nearby, the residents of these sites would devote much energy to making salt by evaporating sea water.
David Glassman has examined the skeletal remains of the burials from San Juan and Chac Balam. They appear to have had some of the problems shared by Mayas and other pre-industrial societies such as high infant mortality. However, they clearly were a very healthy population with very good nutrition and relatively little disease.
In general, the Late Classic sites on the west, or leeward, side of Ambergris Caye give us the impression of a successful and wealthy society. Trade goods from all over the Maya world were available to these people as were the easily accessible maritime resources for food. No doubt they had an enviable standard of living.
On the front side of the island, things were much different. Many small sites were found which do not have mounds. Where we excavated these sites, like the Franco site, we did not find plazas and large platforms for buildings. Instead there were only the buried, thin plaster floors left from small perishable houses. Only a very few people could have lived at each of these sites, perhaps a family or two. The pottery was very crudely made and no imported artifacts were found. While people in the wealthier sites used obsidian and high quality stone tools imported from the mainland, the people at the small windward side sites often made tools of shell. These people probably made a living by fishing. Although they had few 'expensive' possessions, their lives were probably quite comfortable. It appears that the wealth of the sea was more than enough to satisfy their needs.
Other larger communities dating to the Late Classic period are also found on the windward side of Ambergris Caye. Typically, these sites cover several hundred square meters and some of them, such as Mata Grande, Tres Cocos and Habaneros, benefited from long distance trade. While we have not excavated at these sites, surface collections yielded sherds of high quality, imported pottery and artifacts such as basalt grinding stones and obsidian blades from remote areas. Generally these sites do not have monumental architecture, such as structures built on top of platforms as temples. Nevertheless, the inhabitants seem to have thrived in a mixed fishing and trading economy.
Ek Luum is one of the largest of these sites. Located about 250 meters from the present beach, Ek Luum is typical of the windward sites in that it is far enough away from the beach for safety, yet close enough for easy access. It was also built on the shore of the Laguna Cantena within sight of communities like Burning Water and Chac Balam. The major portion of the site is a raised area about 140 meters by 120 meters, elevated about 2.5 meters above the surrounding terrain. Excavations have demonstrated that this area grew gradually by the construction of houses upon the remains of older houses for several hundred years. This is unlike Chac Balam and San Juan, where relatively large scale construction projects created the sites as we see them today. Unlike other windward side sites, Ek Luum includes monumental architecture as well as residences. Mound 1 is an earthen mound approximately four meters in height overlooking the Laguna Cantena which was capped by a series of marl floors. Thousands of sherds of Coconut Walk pottery were recovered from this structure. These probably are the result of ritual activity at Mound 1. Mound 2 is also about four meters above the surrounding area, but unlike Mound 1, residential debris was found there. Perhaps Mound 2 was the residence of the priest and Mound 1 was the location of public ceremonies.
While some exotic goods were found at Ek Luum, it was with much less frequency than at San Juan and Chac Balam. Several obsidian blades were found inside a small building much like Structure 4 at San Juan, which may have also been a storage building. Also, a cache of two pottery vessels was found. In general, though, the people of Ek Luum did not have access to the large quantities of exotic materials that their neighbors did. They used less elaborate pottery and more commonly used shell tools instead of tools of the high quality stone which came to the island from the 'chert-bearing zone' of northern Belize near Orange Walk Town.
The Late Classic period seems to have been a time of much population expansion on Ambergris, as it was elsewhere. Numerous small sites on the island's lagoons were occupied. On the north side of San Pedro Lagoon, at least seven small occupational areas have been identified. Probably, many more exist, but intense and detailed surveys would be required to have any idea of just how many such sites do exist on the island.
The Marco Gonzalez site may be the largest ruin on Ambergris Caye. Located about two miles south of the town of San Pedro, it covers an area of about 355 meters by 155 meters and has at least 53 buildings with a central plaza and several small courtyard groupings. The site's excavators believe that during the Early Classic period, the economy of Marco Gonzalez was based upon exploitation of the vast marine resources which the Caribbean provides. The community saw continued success through the Late Classic period as well. However, during the Postclassic period, when other sites on Ambergris were being abandoned, Marco Gonzalez underwent large scale expansion. Nearly every one of the structures was added to or used at this time.
Even more importantly, it was at this time that Marco Gonzalez became a trade outpost for the great mainland center of Lamanai. Lamanai, the second largest site in Belize, survived and thrived through the 'Maya collapse'. It may well have done so because of its strategic location on the New River Lagoon. While other great centers, such as La Milpa and Rio Azul, were abandoned, Lamanai had the opportunity to become a funnel for forest and agricultural goods of the eastern Peten and northern Belize regions to enter the maritime trade system. Goods were shipped from Lamanai to Marco Gonzalez where they were trans-shipped onto coastal canoes in exchange for other commodities and exotic goods. Both Lamanai and Marco Gonzalez flourished into the 13th century, well after the end of the Maya civilization in much of the rest of Belize. It is even possible that the community of Marco Gonzalez persisted until the mid-15th century.
Like the sites of Chac Balam and San Juan, Marco Gonzalez has yielded an interesting array of exotic artifacts. These include green and gray obsidian as well as pottery from Yucatan and elsewhere. One burial included a Tohil Plumbate pot which was virtually identical to one excavated at San Juan. The similarity is so close that we suspect that they were made by the same potter. What makes this so remarkable is that Plumbate pottery was made only in one small region of Guatemala's northwest Pacific coast. It is usually found in very small quantities as burial offerings or in other ritually significant contexts. The four Plumbate vessels known to have been found on Ambergris Caye are actually a surprisingly large number.
Another site, Los Renegados, physically resembles the small marine oriented sites like the Franco site. However, looks are often deceiving. Los Renegados has no mounds and appears to be only a black dirt deposit. The precise size of the site is not known, probably only a few hundred square meters. The only evidence of houses at the site are the very thin, buried plaster floors found in test excavations. However, Los Renegados yielded a collection of nearly 200 obsidian blades, all from sources in the Guatemalan highlands. Almost all of the pottery from the site is of the Paxcaman series, possibly from the Tulum region. This surprising site indicates that Postclassic activity was certainly occurring on the island, but we have no way of knowing much about the nature of that activity.
The other site on Ambergris Caye which probably dates to the Postclassic period is Basil Jones. South of Rocky Point at the interior of the island is Ambergris Caye's widest section. It is much higher and supports a much different vegetation pattern. Here there is a series of crudely built stone mounds, now nearly destroyed by looting and, perhaps, by the work of Ambergris's first archaeologist, Thomas Gann. This British physician, explorer and archaeologist is famous for probing into nearly every site in northern Belize. It was he who discovered the important murals at Santa Rita and first excavated at Lubaantun. En route to Belize City by boat in the 1920s, Gann made a stop on Ambergris to excavate the 'largest mound' on the northern part of the island. He wrote that:
"several burial mounds were excavated, in which bones were disturbed, the skeletons lying on their backs... and food offerings in pottery receptacles provided for their journey into the next world, indicating the usual method of burial among these people. A second mound was excavated, which had been built over the ruins of a small stone chamber. Nothing was found within it, but beneath the centre was discovered a round saucer for burning incense, with a long handle, and a curious figurine in clay, whose face was covered by a peculiar grilled arrangement, more resembling a baseball mask than anything else, which was studded with rossettes." (Gann, 1926: 60).
The ceramic incensario which Gann describes is clearly from the Postclassic period and the mounds are probably those of Basil Jones.
An important feature of Basil Jones is the network of stone walls which apparently surrounded privately owned fields. Such wall systems have also been found on Cozumel Island and several other locations along the coast of the state of Quintana Roo, Mexico, north of Ambergris. Unlike much of the rest of the island, the higher northern interior has reasonably fertile land which could have been used for farming. If, as we believe, these fields are Postclassic, then Basil Jones represents a major shift in the island economy, away from trading and fishing to an agricultural base.
This shift may have been forced upon the residents of Ambergris Caye to some degree. The late Eric Thompson put forth the idea that a Maya group, the Putun from the Tabasco coast, began to seize control of Maya trade routes at the end of the Classic period. They may well have taken over existing trade routes and consolidated them into one great network. It is also true that Postclassic ports such as Tulum and Ixpaatun were spread further apart than the Classic ports before them. The later ports were also larger and defended from land attacks. This is just what might occur if a large and powerful group forced local 'Mom-and-Pop' operations out of business. Then, the Putun would have needed to carefully locate and defend their outposts from pirates and raiders from competing groups. A French-Mexican expedition led by Michel Peissel in 1988 was able to show that coastal traders could easily travel 30 or more miles in a single day. Certainly, it was not necessary to have ports located every four or five miles as they were on Ambergris in the Classic period.
Whatever the cause, Ambergris Caye in the Postclassic period no longer seems to have had the important role in maritime trade which it enjoyed in the Classic period. The shift to a farming economy at Basil Jones on the interior of the island at the same time coastal margin sites, such as Chac Balam and San Juan, are abandoned may have occurred because of this 'horizontal integration' of the coastal trade system by the Putun Maya. It has been thought that the Maya had abandoned Ambergris Caye long before Europeans arrived.
However, a tantalizing clue to the contrary, a map recently located in the Spanish archives at Seville appears to indicate a settlement on northern Ambergris Caye in the general area of Basil Jones. Other similar notations on the same map are of colonial missions. Of course, where there are missions, there must be Maya for the friars to missionize. Perhaps then, Basil Jones holds the key to understanding the last chapter of the Maya on Ambergris Caye before modern times.
A thousand years ago, the maritime traders of Ambergris had built a large and affluent society. Today, San Pedro and Ambergris Caye are growing again. We do not know how many people lived on the island in the past, but certainly it was more than today. Though it is difficult to know how these people lived, it is clear that the resources of the sea and the wealth from maritime commerce allowed them to live very well.
The Royal Ontario Museum has published information on its digs at Marco Gonzalez and elsewhere on Ambergris Caye. David Pendergast, who is now the director of the Museum, was in charge of excavations at Lamanai and Altun Ha in the 1970s and 1980s, and the Museum also sponsored work on Ambergris Caye and in Cuba.
1989 "Brief Synthesis of Coastal Site Data from Colson Point, Placencia and Marco Gonzalez, Belize." in Coastal Maya Trade and Exchange. Edited by Heather McKillop and Paul Healy. Occasional Papers in Anthropology. Trent University; Peterborough, Ontario.
Guderjan, Thomas H., James F. Garber and Herman A. Smith
1989 "Maritime Trade on Ambergris Caye, Belize." in Coastal Maya Trade and Exchange. Edited by Heather McKillop and Paul Healy. Occasional Papers in Anthropology. Trent University; Peterborough, Ontario.
1988 "San Juan; A Mayan Trans-Shipment Point on Ambergris Caye, Belize." Mexicon X:2:35-37
Guderjan, Thomas H., James F. Garber, Herman A. Smith, Fred Stross, Helen Michel and Frank Asaro
1989 "Maya Maritime Trade and Sources of Obsidian at San Juan, Ambergris Caye, Belize."Journal of Field Archaeology 16:3:363-369.
Mexicon. Leo P. Biese. Samuel Lothrop Library Trust, Box 79, Murray Hill Road, Murray, NH 03243.
Ancient Mesoamerica. Dr. Stephen Houston. Department of Anthropology, Vanderbilt University, Nashville, Tennessee 37235.
Journal of Field Archaeology. Boston University, 675 Commonwealth Avenue, Boston, MA 02215.
Texas Archaeological Society. Center for Archaeological Research, University of Texas at San Antonio, San Antonio, TX 78249.
Belizean Studies. P.O. Box 548, St. John's College, Belize City, Belize, C.A..
The Maya on Ambergris Caye
by Thomas H. Guderjan
Courtesy of Cubola Productions, Belize
Modernism, in its broadest definition, is modern thought, character, or practice. More specifically, the term describes the modernist movement in the arts, its set of cultural tendencies and associated cultural movements, originally arising from wide-scale and far-reaching changes to Western society in the late 19th and early 20th centuries. In particular the development of modern industrial societies and the rapid growth of cities, followed then by the horror of World War I, were among the factors that shaped Modernism. Related terms are modern, modernist, contemporary, and postmodern.
In art, Modernism explicitly rejects the ideology of realism and makes use of the works of the past, through the application of reprise, incorporation, rewriting, recapitulation, revision and parody in new forms. Modernism also rejects the lingering certainty of Enlightenment thinking, as well as the idea of a compassionate, all-powerful Creator.
In general, the term Modernism encompasses the activities and output of those who felt the "traditional" forms of art, architecture, literature, religious faith, social organization and daily life were becoming outdated in the new economic, social, and political conditions of an emerging fully industrialized world. The poet Ezra Pound's 1934 injunction to "Make it new!" was paradigmatic of the movement's approach towards the obsolete. Another paradigmatic exhortation was articulated by philosopher and composer Theodor Adorno, who, in the 1940s, challenged conventional surface coherence, and appearance of harmony typical of the rationality of Enlightenment thinking. A salient characteristic of Modernism is self-consciousness. This self-consciousness often led to experiments with form and work that draws attention to the processes and materials used (and to the further tendency of abstraction).
The modernist movement, at the beginning of the 20th century, marked the first time that the term avant-garde, with which the movement was labeled until the word "modernism" prevailed, was used for the arts (rather than in its original military and political context).
Some commentators define Modernism as a socially progressive trend of thought that affirms the power of human beings to create, improve and reshape their environment with the aid of practical experimentation, scientific knowledge, or technology. From this perspective, Modernism encouraged the re-examination of every aspect of existence, from commerce to philosophy, with the goal of finding that which was 'holding back' progress, and replacing it with new ways of reaching the same end. Others focus on Modernism as an aesthetic introspection. This facilitates consideration of specific reactions to the use of technology in the First World War, and anti-technological and nihilistic aspects of the works of diverse thinkers and artists spanning the period from Friedrich Nietzsche (1844–1900) to Samuel Beckett (1906–1989).
Beginnings: the 19th century
In the late 18th and early 19th centuries, Romanticism developed as a revolt against the effects of the Industrial Revolution and bourgeois values, while emphasizing individual, subjective experience, the sublime, and the supremacy of "Nature", as subjects for art, and revolutionary, or radical extensions of expression, and individual liberty. While J. M. W. Turner (1775-1851), one the greatest landscape painters of the 19th century, was a member of the Romantic movement, as "a pioneer in the study of light, colour, and atmosphere", he "anticipated the French Impressionists" and therefore Modernism "in breaking down conventional formulas of representation; [though] unlike them, he believed that his works should always express significant historical, mythological, literary, or other narrative themes".
By mid-century, however, a synthesis of the ideas of Romanticism with more stable political ideas had emerged, partly in reaction to the failed Romantic and democratic Revolutions of 1848. It was exemplified by Otto von Bismarck's Realpolitik and by the "practical" philosophical ideas of Auguste Comte's positivism. This stabilizing synthesis of Realist political and Romantic aesthetic ideology was called by various names: in Great Britain it was known as the Victorian era. Central to this synthesis were common assumptions and institutional frames of reference, including the religious norms found in Christianity, scientific norms found in classical physics, as well as the idea that the depiction of external reality from an objective standpoint was not only possible but desirable. Cultural critics and historians called this ideology realism, although this term is not universal. In philosophy, the rationalist, materialist and positivist movements established the primacy of reason.
Against this current, however, ran another series of ideas, some of which were a direct continuation of Romantic schools of thought. Amongst those who followed these ideas were the English poets and painters that constituted the Pre-Raphaelite Brotherhood, who, from about 1850, opposed the dominant trend of industrial Victorian England because of their "opposition to technical skill without inspiration". They were influenced by the writings of the art critic John Ruskin (1819–1900), who had strong feelings about the role of art in helping to improve the lives of the urban working classes in the rapidly expanding industrial cities of Britain. Clement Greenberg describes the Pre-Raphaelite Brotherhood as proto-Modernists: "There the proto-Modernists were, of all people, the pre-Raphaelites (and even before them, as proto-proto-Modernists, the German Nazarenes). The Pre-Raphaelites actually foretold Manet (with whom Modernist painting most definitely begins). They acted on a dissatisfaction with painting as practiced in their time, holding that its realism wasn't truthful enough". Rationalism also had other opponents later in the 19th century, in particular in the reactions against the philosopher Hegel's (1770–1831) dialectic view of civilization and history, first from Søren Kierkegaard (1813–55) and later from Friedrich Nietzsche (1844–1900). Together, these different reactions challenged the comforting ideas of certainty derived from a belief in civilization, history, or pure reason.
Indeed from the 1870s onward, the idea that history and civilization were inherently progressive, and that progress was always good (and had no sharp breaks), came under increasing attack. The composer Richard Wagner (1813–83) (Der Ring des Nibelungen, 1853–70) and playwright Henrik Ibsen (1828–1906) were prominent in their critiques of contemporary civilization and for warnings that accelerating "progress" would lead to the creation of individuals detached from social values and isolated from their fellow men. Arguments arose that the values of the artist and those of society were not merely different, but that Society was antithetical to Progress, and could not move forward in its present form. In addition the philosopher Schopenhauer (1788–1860) (The World as Will and Idea, 1819) called into question the previous optimism, and his ideas had an important influence on later thinkers, including Nietzsche.
Two of the most significant thinkers of the period were biologist Charles Darwin (1809–82), author of On the Origin of Species by Means of Natural Selection (1859), and political scientist Karl Marx (1818–83), author of Das Kapital (1867). Darwin's theory of evolution by natural selection undermined religious certainty and the idea of human uniqueness. In particular, the notion that human beings were driven by the same impulses as "lower animals" proved to be difficult to reconcile with the idea of an ennobling spirituality. Karl Marx argued that there were fundamental contradictions within the capitalist system, and that the workers were anything but free. Both thinkers were major influences on the development of modernism. This is not to say that all modernists, or modernist movements rejected either religion, or all aspects of Enlightenment thought, rather that Modernism questioned the axioms of the previous age.
Historians, and writers in different disciplines, have suggested various dates as starting points for modernism. William Everdell, for example, has argued that Modernism began in the 1870s, when metaphorical (or ontological) continuity began to yield to the discrete with mathematician Richard Dedekind's (1831–1916) Dedekind cut, and Ludwig Boltzmann's (1844–1906) statistical thermodynamics. Everdell also thinks Modernism in painting began in 1885-86 with Seurat's Divisionism, the "dots" used to paint "A Sunday Afternoon on the Island of La Grande Jatte." On the other hand Clement Greenberg called Immanuel Kant (1724–1804) "the first real Modernist", though he also wrote, "What can be safely called Modernism emerged in the middle of the last century—and rather locally, in France, with Baudelaire in literature and Manet in painting, and perhaps with Flaubert, too, in prose fiction. (It was a while later, and not so locally, that Modernism appeared in music and architecture)." And cabaret, which gave birth to so many of the arts of modernism, may be said to have begun in France in 1881 with the opening of the Black Cat in Montmartre, the beginning of the ironic monologue, and the founding of the Society of Incoherent Arts.
The beginning of the 20th century marked the first time a movement in the arts was described as "avant-garde"—a term previously used in military and political contexts, which remained to describe movements which identify themselves as attempting to overthrow some aspect of tradition or the status quo. Much later, Surrealism gained a reputation among the public as the most extreme form of modernism, or "the avant-garde of modernism".
Separately, in the arts and letters, two important approaches developed in France. The first was impressionism, a school of painting that initially focused on work done, not in studios, but outdoors (en plein air). Impressionist paintings demonstrated that human beings do not see objects, but instead see light itself. The school gathered adherents despite internal divisions among its leading practitioners, and became increasingly influential. Initially rejected from the most important commercial show of the time, the government-sponsored Paris Salon, the Impressionists organized yearly group exhibitions in commercial venues during the 1870s and 1880s, timing them to coincide with the official Salon. A significant event of 1863 was the Salon des Refusés, created by Emperor Napoleon III to display all of the paintings rejected by the Paris Salon. While most were standard-style works by inferior artists, the work of Manet attracted tremendous attention and opened commercial doors to the movement.
The second French school was Symbolism, which literary historians see beginning with the poet Charles Baudelaire (1821–67) (Les fleurs du mal, 1857), and including the later poets, Arthur Rimbaud (1854–91), Paul Verlaine (1844–96), Stéphane Mallarmé (1842–98), and Paul Valéry (1871–1945). The symbolists "stressed the priority of suggestion and evocation over direct description and explicit analogy," and were especially interested in "the musical properties of language."
The economic upheaval of the late nineteenth century became the basis to argue for a radically different kind of art and thinking. Influential innovations included steam-powered industrialization, and especially the development of railways, starting in Britain in the 1830s, and the subsequent advancements in physics, engineering and architecture associated with this. A major 19th-century engineering achievement was The Crystal Palace, the huge cast-iron and plate glass exhibition hall built for The Great Exhibition of 1851 in London. Glass and iron were used in a similar monumental style in the construction of major railway terminals in London, such as Paddington Station (1854) and King's Cross Station (1852). These technological advances led to the building of later structures like the Brooklyn Bridge (1883) and the Eiffel Tower (1889). The latter broke all previous limitations on how tall man-made objects could be. These engineering marvels radically altered the 19th-century urban environment and the daily lives of people.
The human misery of crowded industrial cities, as well as, on the other hand, the new possibilities created by science, brought changes that shook European civilization, which had, until then, regarded itself as having a continuous and progressive line of development from the Renaissance. Furthermore, the human experience of time itself was altered with the development of the electric telegraph from 1837, and the adoption of standard time by British railway companies from 1845, and in the rest of the world over the next fifty years.
The changes that took place at the beginning of the 20th-century are emphasized by the fact that many modern disciplines, including sciences such as physics, mathematics, neuroscience and economics, and arts such as ballet and architecture, call their pre-20th century forms classical.
Late 19th to early 20th centuries
In the 1880s, a strand of thinking began to assert that it was necessary to push aside previous norms entirely, instead of merely revising past knowledge in light of contemporary techniques. The growing movement in art paralleled developments in physics, such as Einstein's Special Theory of Relativity (1905); innovations in industry, such as the development of the internal combustion engine; and the increased role of the social sciences in public policy. Indeed it was argued that, if the nature of reality itself was in question, and if previous restrictions which had been in place around human activity were dissolving, then art, too, would have to radically change. Thus, in the first twenty years of the 20th century many writers, thinkers, and artists broke with the traditional means of organizing literature, painting, and music; the results were abstract art, atonal music, and the stream of consciousness technique in the novel.
Influential in the early days of Modernism were the theories of Sigmund Freud (1856–1939), and Ernst Mach (1838–1916). Mach argued, beginning in the 1880s with The Science of Mechanics (1883), that the mind had a fundamental structure, and that subjective experience was based on the interplay of parts of the mind. Freud's first major work was Studies on Hysteria (with Josef Breuer) (1895). According to Freud's ideas, all subjective reality was based on the play of basic drives and instincts, through which the outside world was perceived. As a philosopher of science Ernst Mach was a major influence on logical positivism, and through his criticism of Isaac Newton, a forerunner of Einstein's theory of relativity. According to these ideas of Mach, the relations of objects in nature were not guaranteed but known only through a sort of mental shorthand. This represented a break with the past, in that previously it was believed that external and absolute reality could impress itself, as it was, on an individual, as, for example, in John Locke's (1632–1704) empiricism, which saw the mind beginning as a tabula rasa (An Essay Concerning Human Understanding, 1690). Freud's description of subjective states, involving an unconscious mind full of primal impulses, and counterbalancing self-imposed restrictions, was combined by Carl Jung (1875–1961) with the idea of the collective unconscious, with which the conscious mind fought or embraced. While Charles Darwin's work remade the Aristotelian concept of "man, the animal" in the public mind, Jung suggested that human impulses toward breaking social norms were not the product of childishness, or ignorance, but rather derived from the essential nature of the human animal.
Friedrich Nietzsche was another major precursor of modernism, with a philosophy in which psychological drives, specifically the 'Will to power', were more important than facts, or things. Henri Bergson (1859–1941), on the other hand, emphasized the difference between scientific, clock time and the direct, subjective, human experience of time. His work on time and consciousness "had a great influence on twentieth-century novelists," especially those modernists who used the stream of consciousness technique, such as Dorothy Richardson (Pointed Roofs, 1915), James Joyce (Ulysses, 1922) and Virginia Woolf (1882–1941) (Mrs Dalloway, 1925; To the Lighthouse, 1927). Also important in Bergson's philosophy was the idea of élan vital, the life force, which "brings about the creative evolution of everything". His philosophy also placed a high value on intuition, though without rejecting the importance of the intellect. These various thinkers were united by a distrust of Victorian positivism and certainty.
Out of this collision of ideals derived from Romanticism, and an attempt to find a way for knowledge to explain that which was as yet unknown, came the first wave of works, which, while their authors considered them extensions of existing trends in art, broke the implicit contract with the general public that artists were the interpreters and representatives of bourgeois culture and ideas. These "modernist" landmarks include the atonal ending of Arnold Schoenberg's Second String Quartet in 1908, the expressionist paintings of Wassily Kandinsky starting in 1903 and culminating with his first abstract painting and the founding of the Blue Rider group in Munich in 1911, and the rise of fauvism and the inventions of cubism from the studios of Henri Matisse, Pablo Picasso, Georges Braque and others in the years between 1900 and 1910.
Important literary precursors of Modernism were: Fyodor Dostoyevsky (1821–81) (Crime and Punishment (1866), The Brothers Karamazov (1880); Walt Whitman (1819–92) (Leaves of Grass) (1855–91); Charles Baudelaire (1821–67) (Les fleurs du mal), Rimbaud (1854–91) (Illuminations, 1874); August Strindberg (1849–1912), especially his later plays, including, the trilogy To Damascus 1898–1901, A Dream Play (1902), The Ghost Sonata (1907).
This modern movement broke with the past in the first three decades of the 20th century, and radically redefined various art forms. The following is a list of significant literary figures active between 1900 and 1930 (though it includes a number whose careers extended beyond 1930):
- Blaise Cendrars (1887-1961) / "Les Pâques à New York" (1912, Éditions des Hommes Nouveaux), "La Prose du Transsibérien et la Petite Jehanne de France" (1913, Éditions des Hommes Nouveaux), "Séquences" (1913, Éditions des Hommes Nouveaux), "La Guerre au Luxembourg" (1916, D. Niestlé), "Profond aujourd'hui" (1917, A la Belle Édition), "Le Panama ou les aventures de mes sept oncles" (1918, Éditions de la Sirène), "Dix-neuf poèmes élastiques" (1919, Au Sans Pareil).
- Anna Akhmatova (1889–1966)
- Mário de Andrade (1893–1945)
- Gabriele d'Annunzio (1863–1938)
- Guillaume Apollinaire (1880–1918)
- Samuel Beckett (1906–1989)
- Andrei Bely (1880–1934)
- Gottfried Benn (1886–1956)
- Ivan Cankar (1876–1918)
- Constantine P. Cavafy (1863–1933)
- Joseph Conrad (1857–1924)
- Alfred Döblin (1878–1957)
- H.D. (Hilda Doolittle) (1886–1961)
- T. S. Eliot (1888–1965)
- William Faulkner (1897–1962)
- F. Scott Fitzgerald (1896–1940)
- E. M. Forster (1879–1971)
- Ernest Hemingway (1899–1961)
- Hugo von Hofmannsthal (1874–1929)
- Max Jacob (1876–1944)
- James Joyce (1882–1941)
- Franz Kafka (1883–1924)
- Georg Kaiser (1878–1945)
- D. H. Lawrence (1885–1930)
- Wyndham Lewis (1882–1957)
- Thomas Mann (1875–1955)
- Eugene O'Neill (1888–1953)
- Fernando Pessoa (1888–1935)
- Mário de Sá-Carneiro (1890–1916)
- Ezra Pound (1885–1972)
- Marcel Proust (1871–1922)
- Dorothy Richardson (1873–1957)
- Rainer Maria Rilke (1875–1926)
- Gertrude Stein (1874–1946)
- Wallace Stevens (1875–1955)
- Italo Svevo (1861–1928)
- Ernst Toller (1893–1939)
- Georg Trakl (1887–1914)
- Paul Valéry (1871–1945)
- Robert Walser (1878–1956)
- William Carlos Williams (1883–1963)
- Frank Wedekind (1864–1918)
- Virginia Woolf (1882–1941)
- W. B. Yeats (1865–1939)
On the eve of the First World War a growing tension and unease with the social order, already seen in the Russian Revolution of 1905 and the agitation of "radical" parties, also manifested itself in artistic works in every medium which radically simplified or rejected previous practice. Young painters such as Pablo Picasso and Henri Matisse were causing a shock with their rejection of traditional perspective as the means of structuring paintings—a step that none of the impressionists, not even Cézanne, had taken. In 1907, as Picasso was painting Les Demoiselles d'Avignon, Oskar Kokoschka was writing Mörder, Hoffnung der Frauen (Murderer, Hope of Women), the first Expressionist play (produced with scandal in 1909), and Arnold Schoenberg was composing his String Quartet No.2 in F-sharp minor, his first composition "without a tonal center".
Cubism was brought to the attention of the general public for the first time in 1911 at the Salon des Indépendants in Paris (held 21 April – 13 June). Jean Metzinger, Albert Gleizes, Henri Le Fauconnier, Robert Delaunay, Fernand Léger and Roger de la Fresnaye were shown together in Room 41, provoking a 'scandal' out of which Cubism emerged and spread throughout Paris and beyond. Also in 1911, Kandinsky painted Bild mit Kreis (Picture With a Circle) which he later called the first abstract painting.
In 1912 Jean Metzinger and Albert Gleizes wrote the first (and only) major Cubist manifesto, Du "Cubisme", published in time for the Salon de la Section d'Or, the largest Cubist exhibition to date. In 1912 Metzinger painted and exhibited his enchanting La Femme au Cheval (Woman with a horse) and Danseuse au café (Dancer in a café). Albert Gleizes painted and exhibited his Les Baigneuses (The Bathers) and his monumental Le Dépiquage des Moissons (Harvest Threshing). This work, along with La Ville de Paris (City of Paris) by Robert Delaunay, is the largest and most ambitious Cubist painting undertaken during the pre-War Cubist period.
In 1913—the year of Edmund Husserl's Ideas, Niels Bohr's quantized atom, Ezra Pound's founding of imagism, the Armory Show in New York, and, in Saint Petersburg, the "first futurist opera," Victory Over the Sun—another Russian composer Igor Stravinsky, working in Paris for Sergei Diaghilev and the Ballets Russes, composed The Rite of Spring for a ballet, choreographed by Vaslav Nijinsky, that depicted human sacrifice.
These developments began to give a new meaning to what was termed "modernism": It embraced discontinuity, rejecting smooth change in everything from biology to fictional character development and filmmaking. It approved disruption, rejecting or moving beyond simple realism in literature and art, and rejecting or dramatically altering tonality in music. This set modernists apart from 19th-century artists, who had tended to believe not only in smooth change ("evolutionary" rather than "revolutionary") but also in the progressiveness of such change—"progress". Writers like Dickens and Tolstoy, painters like Turner, and musicians like Brahms were not radicals or "Bohemians", but were instead valued members of society who produced art that added to society, even when critiquing its less desirable aspects. Modernism, while still "progressive", increasingly saw traditional forms and traditional social arrangements as hindering progress, and therefore recast the artist as a revolutionary, overthrowing rather than enlightening.
Futurism exemplifies this trend. In 1909, the Parisian newspaper Le Figaro published F.T. Marinetti's first manifesto. Soon afterward a group of painters (Giacomo Balla, Umberto Boccioni, Carlo Carrà, Luigi Russolo, and Gino Severini) co-signed the Futurist Manifesto. Modeled on the famous "Communist Manifesto" of the previous century, such manifestoes put forward ideas that were meant to provoke and to gather followers. Strongly influenced by Bergson and Nietzsche, Futurism was part of the general trend of Modernist rationalization of disruption.
Modernist philosophy and art were still viewed as only a part of the larger social movement. Artists such as Klimt and Cézanne, and composers such as Mahler and Richard Strauss were "the terrible moderns"—those more avant-garde were more heard of than heard. Polemics in favor of geometric or purely abstract painting were largely confined to "little magazines" (like The New Age in the UK) with tiny circulations. Modernist primitivism and pessimism were controversial, but were not seen as representative of the Edwardian mainstream, which was more inclined towards a Victorian faith in progress and liberal optimism. Modernist art style derived from the influences of Cubism, most notably the work of Picasso. Modernist art was all about fragmentation versus order, the abstract and the symbolic. The newfound 'machine aesthetic' contradicted the old romantic, traditional styles, instead focusing on sharp lines, multi-facets and the lack of a human element.
However, the Great War and its subsequent events were the cataclysmic upheavals that late 19th-century artists such as Brahms had worried about, and avant-gardists had embraced. First, the failure of the previous status quo seemed self-evident to a generation that had seen millions die fighting over scraps of earth—prior to the war, it had been argued that no one would fight such a war, since the cost was too high. Second, the birth of a machine age changed the conditions of life—machine warfare became a touchstone of the ultimate reality. Finally, the immensely traumatic nature of the experience dashed basic assumptions: realism seemed bankrupt when faced with the fundamentally fantastic nature of trench warfare, as exemplified by books such as Erich Maria Remarque's All Quiet on the Western Front (1929). Moreover, the view that mankind was making slow and steady moral progress came to seem ridiculous in the face of the senseless slaughter. The First World War fused the harshly mechanical geometric rationality of technology with the nightmarish irrationality of myth.
Thus Modernism, which had been a minority taste before the war, came to define the 1920s. It appeared in Europe in such critical movements as Dada and then in constructive movements such as surrealism, as well as in smaller movements such as the Bloomsbury Group, which included British novelists Virginia Woolf and E. M. Forster. Again, impressionism was a precursor: breaking with the idea of national schools, artists and writers adopted ideas of international movements. Surrealism, cubism, Bauhaus, and Leninism are all examples of movements that rapidly found adopters far beyond their geographic origins.
Each of these "modernisms," as some observers labelled them at the time, stressed new methods to produce new results.
Exhibitions, theatre, cinema, books and buildings all served to cement in the public view the perception that the world was changing. Hostile reaction often followed, as paintings were spat upon, riots organized at the opening of works, and political figures denounced Modernism as unwholesome and immoral. At the same time, the 1920s were known as the "Jazz Age", and the public showed considerable enthusiasm for cars, air travel, the telephone and other technological advances.
By 1930, Modernism had won a place in the establishment, including the political and artistic establishment, although by this time Modernism itself had changed. There was a general reaction in the 1920s against the pre-1918 Modernism, which emphasized its continuity with a past while rebelling against it, and against the aspects of that period which seemed excessively mannered, irrational, and emotionalistic. The post-World War period, at first, veered either to systematization or nihilism and had, as perhaps its most paradigmatic movement, Dada.
While some writers attacked the madness of the new Modernism, others described it as soulless and mechanistic. Among modernists there were disputes about the importance of the public, the relationship of art to audience, and the role of art in society. Modernism comprised a series of sometimes contradictory responses to the situation as it was understood, and the attempt to wrestle universal principles from it. In the end science and scientific rationality, often taking models from the 18th-century Enlightenment, came to be seen as the source of logic and stability, while the basic primitive sexual and unconscious drives, along with the seemingly counter-intuitive workings of the new machine age, were taken as the basic emotional substance. From these two seemingly incompatible poles, modernists began to fashion a complete weltanschauung that could encompass every aspect of life.
By 1930, Modernism had entered popular culture. With the increasing urbanization of populations, it was beginning to be looked to as the source for ideas to deal with the challenges of the day. As Modernism was studied in universities, it was developing a self-conscious theory of its own importance. Popular culture, which was not derived from high culture but instead from its own realities (particularly mass production) fueled much modernist innovation. By 1930 The New Yorker magazine began publishing new and modern ideas by young writers and humorists like Dorothy Parker, Robert Benchley, E. B. White, S. J. Perelman, and James Thurber, amongst others. Modern ideas in art appeared in commercials and logos, the famous London Underground logo, designed by Edward Johnston in 1919, being an early example of the need for clear, easily recognizable and memorable visual symbols.
Another strong influence at this time was Marxism. After the generally primitivistic/irrationalist aspect of pre-World War I Modernism, which for many modernists precluded any attachment to merely political solutions, and the neoclassicism of the 1920s, as represented most famously by T. S. Eliot and Igor Stravinsky—which rejected popular solutions to modern problems—the rise of Fascism, the Great Depression, and the march to war helped to radicalise a generation. The Russian Revolution of 1917 catalyzed the fusion of political radicalism and utopianism, with more expressly political stances. Bertolt Brecht, W. H. Auden, André Breton, Louis Aragon and the philosophers Antonio Gramsci and Walter Benjamin are perhaps the most famous exemplars of this modernist form of Marxism. This move to the radical left, however, was neither universal, nor definitional, and there is no particular reason to associate modernism, fundamentally, with 'the left'. Modernists explicitly of 'the right' include Salvador Dalí, Wyndham Lewis, T. S. Eliot, Ezra Pound, the Dutch author Menno ter Braak and others.
One of the most visible changes of this period was the adoption of new technologies into daily life of ordinary people. Electricity, the telephone, the radio, the automobile—and the need to work with them, repair them and live with them—created social change. The kind of disruptive moment that only a few knew in the 1880s became a common occurrence. For example, the speed of communication reserved for the stock brokers of 1890 became part of family life, at least in North America. Associated with urbanization and changing social mores also came smaller families and changed relationships between parents and their children.
Significant modernist literary works continued to be created in the 1920s and 1930s, including further novels by Marcel Proust, Virginia Woolf, Robert Musil, and Dorothy Richardson. The American modernist dramatist Eugene O'Neill's career began in 1914, but his major works appeared in the 1920s, 1930s, and early 1940s. Two other significant modernist dramatists writing in the 1920s and 1930s were Bertolt Brecht and Federico García Lorca. D. H. Lawrence's Lady Chatterley's Lover was privately published in 1928, while another important landmark for the history of the modern novel came with the publication of William Faulkner's The Sound and the Fury in 1929. In the 1930s, in addition to further major works by Faulkner, Samuel Beckett published his first major work, the novel Murphy (1938). Then in 1939 James Joyce's Finnegans Wake appeared. In poetry T. S. Eliot, E. E. Cummings, and Wallace Stevens were writing from the 1920s until the 1950s. While modernist poetry in English is often viewed as an American phenomenon, with leading exponents including Ezra Pound, T. S. Eliot, Marianne Moore, William Carlos Williams, H.D., and Louis Zukofsky, there were important British modernist poets, including David Jones, Hugh MacDiarmid, Basil Bunting, and W. H. Auden. European modernist poets include Federico García Lorca, Anna Akhmatova, Constantine Cavafy, and Paul Valéry.
After World War II (mainly the visual and performing arts)
Though The Oxford Encyclopedia of British Literature sees Modernism ending by c.1939, with regard to British and American literature, "When (if) Modernism petered out and postmodernism began has been contested almost as hotly as when the transition from Victorianism to Modernism occurred". Clement Greenberg sees Modernism ending in the 1930s, with the exception of the visual and performing arts, but with regard to music, Paul Griffiths notes that, while Modernism "seemed to be a spent force" by the late 1920s, after World War II, "a new generation of composers - Boulez, Barraqué, Babbitt, Nono, Stockhausen, Xenakis" revived modernism. In fact many literary modernists lived into the 1950s and 1960s, though generally speaking they were no longer producing major works. Amongst modernists still publishing were Wallace Stevens, Gottfried Benn, T. S. Eliot, Anna Akhmatova, William Faulkner, Dorothy Richardson, John Cowper Powys, and Ezra Pound. However, T. S. Eliot published two plays in the 1950s, while Basil Bunting, born in 1901, published his most important modernist poem Briggflatts in 1965. In addition Hermann Broch's The Death of Virgil was published in 1945 and Thomas Mann's Doctor Faustus in 1947. Then there is Samuel Beckett, born in 1906, a writer with roots in the expressionist tradition of modernism, who produced works from the 1930s until the 1980s, including Molloy (1951), En attendant Godot (1953), Happy Days (1961), Rockaby (1981). There are, however, those who see him as a post-modernist.
The post-war period left the capitals of Europe in upheaval with an urgency to economically and physically rebuild and to politically regroup. In Paris (the former center of European culture and the former capital of the art world) the climate for art was a disaster. Important collectors, dealers, and modernist artists, writers, and poets had fled Europe for New York and America. The surrealists and modern artists from every cultural center of Europe had fled the onslaught of the Nazis for safe haven in the United States. Many of those who didn't flee perished. A few artists, notably Pablo Picasso, Henri Matisse, and Pierre Bonnard, remained in France and survived.
The 1940s in New York City heralded the triumph of American abstract expressionism, a modernist movement that combined lessons learned from Henri Matisse, Pablo Picasso, surrealism, Joan Miró, cubism, Fauvism, and early Modernism via great teachers in America like Hans Hofmann and John D. Graham. American artists benefited from the presence of Piet Mondrian, Fernand Léger, Max Ernst and the André Breton group, Pierre Matisse's gallery, and Peggy Guggenheim's gallery The Art of This Century, as well as other factors.
Pollock and abstract influences
During the late 1940s Jackson Pollock's radical approach to painting revolutionized the potential for all contemporary art that followed him. To some extent Pollock realized that the journey toward making a work of art was as important as the work of art itself. Like Pablo Picasso's innovative reinventions of painting and sculpture in the early 20th century via cubism and constructed sculpture, Pollock redefined the way art gets made. His move away from easel painting and conventionality was a liberating signal to the artists of his era and to all who came after. Artists realized that Jackson Pollock's process—placing unstretched raw canvas on the floor where it could be attacked from all four sides using artistic and industrial materials; dripping and throwing linear skeins of paint; drawing, staining, and brushing; using imagery and non-imagery—essentially blasted artmaking beyond any prior boundary. Abstract expressionism generally expanded and developed the definitions and possibilities available to artists for the creation of new works of art.
The other abstract expressionists followed Pollock's breakthrough with new breakthroughs of their own. In a sense the innovations of Jackson Pollock, Willem de Kooning, Franz Kline, Mark Rothko, Philip Guston, Hans Hofmann, Clyfford Still, Barnett Newman, Ad Reinhardt, Robert Motherwell, Peter Voulkos and others opened the floodgates to the diversity and scope of all the art that followed them. Rereadings into abstract art by art historians such as Linda Nochlin, Griselda Pollock and Catherine de Zegher critically show, however, that pioneering women artists who produced major innovations in modern art had been ignored by official accounts of its history.
In the 1960s after abstract expressionism
In abstract painting during the 1950s and 1960s several new directions like hard-edge painting and other forms of geometric abstraction began to appear in artist studios and in radical avant-garde circles as a reaction against the subjectivism of abstract expressionism. Clement Greenberg became the voice of post-painterly Abstraction when he curated an influential exhibition of new painting that toured important art museums throughout the United States in 1964. Color Field painting, hard-edge painting and Lyrical Abstraction emerged as radical new directions.
By the late 1960s however, postminimalism, process art and Arte Povera also emerged as revolutionary concepts and movements that encompassed both painting and sculpture, via lyrical abstraction and the postminimalist movement, and in early conceptual art. Process art as inspired by Pollock enabled artists to experiment with and make use of a diverse encyclopedia of style, content, material, placement, sense of time, and plastic and real space. Nancy Graves, Ronald Davis, Howard Hodgkin, Larry Poons, Jannis Kounellis, Brice Marden, Bruce Nauman, Richard Tuttle, Alan Saret, Walter Darby Bannard, Lynda Benglis, Dan Christensen, Larry Zox, Ronnie Landfield, Eva Hesse, Keith Sonnier, Richard Serra, Sam Gilliam, Mario Merz and Peter Reginato were some of the younger artists who emerged during the era of late modernism that spawned the heyday of the art of the late 1960s.
In 1962 the Sidney Janis Gallery mounted The New Realists, the first major pop art group exhibition in an uptown art gallery in New York City. Janis mounted the exhibition in a 57th Street storefront near his gallery at 15 E. 57th Street. The show sent shockwaves through the New York School and reverberated worldwide. Earlier in England in 1958 the term "Pop Art" was used by Lawrence Alloway to describe paintings that celebrated consumerism of the post World War II era. This movement rejected abstract expressionism and its focus on the hermeneutic and psychological interior in favor of art that depicted and often celebrated material consumer culture, advertising, and iconography of the mass production age. The early works of David Hockney and the works of Richard Hamilton and Eduardo Paolozzi (who created the groundbreaking I was a Rich Man's Plaything, 1947) are considered seminal examples in the movement. Meanwhile in the downtown scene in New York's East Village 10th Street galleries, artists were formulating an American version of pop art. Claes Oldenburg had his storefront, and the Green Gallery on 57th Street began to show the works of Tom Wesselmann and James Rosenquist. Later Leo Castelli exhibited the works of other American artists, including those of Andy Warhol and Roy Lichtenstein for most of their careers. There is a connection between the radical works of Marcel Duchamp and Man Ray, the rebellious Dadaists with a sense of humor, and pop artists like Claes Oldenburg, Andy Warhol, and Roy Lichtenstein, whose paintings reproduce the look of Benday dots, a technique used in commercial reproduction.
By the early 1960s minimalism emerged as an abstract movement in art (with roots in geometric abstraction of Kazimir Malevich, the Bauhaus and Piet Mondrian) that rejected the idea of relational and subjective painting, the complexity of abstract expressionist surfaces, and the emotional zeitgeist and polemics present in the arena of action painting. Minimalism argued that extreme simplicity could capture all of the sublime representation needed in art. Associated with painters such as Frank Stella, minimalism in painting, as opposed to other areas, is a modernist movement. Minimalism is variously construed either as a precursor to postmodernism, or as a postmodern movement itself. In the latter perspective, early minimalism yielded advanced modernist works, but the movement partially abandoned this direction when some artists like Robert Morris changed direction in favor of the anti-form movement.
Hal Foster, in his essay The Crux of Minimalism, examines the extent to which Donald Judd and Robert Morris both acknowledge and exceed Greenbergian Modernism in their published definitions of minimalism. He argues that minimalism is not a "dead end" of modernism, but a "paradigm shift toward postmodern practices that continue to be elaborated today."
In the late 1960s Robert Pincus-Witten coined the term postminimalism to describe minimalist-derived art which had content and contextual overtones that minimalism rejected. The term was applied by Pincus-Witten to the work of Eva Hesse, Keith Sonnier, Richard Serra, and to new work by former minimalists Robert Smithson, Robert Morris, Sol LeWitt, Barry Le Va, and others. Other minimalists, including Donald Judd, Dan Flavin, Carl Andre, Agnes Martin, and John McCracken, continued to produce late modernist paintings and sculpture for the remainder of their careers.
Since then, many artists have embraced minimal or postminimal styles and the label "postmodern" has been attached to them.
Collage, assemblage, installations
Related to abstract expressionism was the emergence of combining manufactured items with artist materials, moving away from previous conventions of painting and sculpture. The work of Robert Rauschenberg exemplifies this trend. His "combines" of the 1950s were forerunners of pop art and installation art, and used assemblages of large physical objects, including stuffed animals, birds and commercial photographs. Rauschenberg, Jasper Johns, Larry Rivers, John Chamberlain, Claes Oldenburg, George Segal, Jim Dine, and Edward Kienholz were among important pioneers of both abstraction and pop art. Creating new conventions of art-making, they made acceptable in serious contemporary art circles the radical inclusion in their works of unlikely materials. Another pioneer of collage was Joseph Cornell, whose more intimately scaled works were seen as radical because of both his personal iconography and his use of found objects.
In the early 20th century Marcel Duchamp exhibited a urinal as a sculpture. He professed his intent that people look at the urinal as if it were a work of art because he said it was a work of art. He referred to his work as "readymades". Fountain was a urinal signed with the pseudonym R. Mutt, the exhibition of which shocked the art world in 1917. This and Duchamp's other works are generally labelled as Dada. Duchamp can be seen as a precursor to conceptual art, other famous examples being John Cage's 4'33", which is four minutes and thirty three seconds of silence, and Rauschenberg's Erased de Kooning Drawing. Many conceptual works take the position that art is the result of the viewer viewing an object or act as art, not of the intrinsic qualities of the work itself. Thus, because Fountain was exhibited, it was a sculpture.
Marcel Duchamp famously gave up "art" in favor of chess. Avant-garde composer David Tudor created a piece, Reunion (1968), written jointly with Lowell Cross, that features a chess game in which each move triggers a lighting effect or projection. Duchamp and Cage played the game at the work's premiere.
Steven Best and Douglas Kellner identify Rauschenberg and Jasper Johns as part of the transitional phase, influenced by Marcel Duchamp, between Modernism and postmodernism. Both used images of ordinary objects, or the objects themselves, in their work, while retaining the abstraction and painterly gestures of high modernism.
Another trend in art associated with neo-Dada is the use of a number of different media together. Intermedia, a term coined by Dick Higgins, was meant to convey new art forms along the lines of Fluxus, concrete poetry, found objects, performance art, and computer art. Higgins was publisher of the Something Else Press, a concrete poet, husband of artist Alison Knowles, and an admirer of Marcel Duchamp.
Performance and happenings
During the late 1950s and 1960s artists with a wide range of interests began to push the boundaries of contemporary art. Yves Klein in France; Carolee Schneemann, Yayoi Kusama, Charlotte Moorman and Yoko Ono in New York City; and Joseph Beuys, Wolf Vostell and Nam June Paik in Germany were pioneers of performance-based works of art. Groups like The Living Theater with Julian Beck and Judith Malina collaborated with sculptors and painters creating environments, radically changing the relationship between audience and performer, especially in their piece Paradise Now. The Judson Dance Theater, located at the Judson Memorial Church, New York, and the Judson dancers, notably Yvonne Rainer, Trisha Brown, Elaine Summers, Sally Gross, Simone Forti, Deborah Hay, Lucinda Childs, Steve Paxton and others, collaborated with artists Robert Morris, Robert Whitman, John Cage, Robert Rauschenberg, and engineers like Billy Klüver. Park Place Gallery was a center for musical performances by electronic composers Steve Reich, Philip Glass and other notable performance artists including Joan Jonas.
These performances were intended as works of a new art form combining sculpture, dance, and music or sound, often with audience participation. They were characterized by the reductive philosophies of minimalism and the spontaneous improvisation and expressivity of abstract expressionism. Images of Schneemann's performances of pieces meant to shock are occasionally used to illustrate these kinds of art, and she is often seen photographed while performing her piece Interior Scroll. However, such images illustrate precisely what performance art is not: in performance art, the performance itself is the medium, and no other medium can illustrate it. Performance art is performed, not captured; by its nature it is momentary and evanescent, which is part of the point of the medium as art. Representations of performance art in other media, whether by image, video, narrative or otherwise, select certain points of view in space or time and involve the inherent limitations of each medium, and therefore cannot truly illustrate the medium of performance as art.
During the same period, various avant-garde artists created Happenings. Happenings were mysterious and often spontaneous and unscripted gatherings of artists and their friends and relatives in various specified locations, often incorporating exercises in absurdity, physicality, costuming, spontaneous nudity, and various random or seemingly disconnected acts. Notable creators of happenings included Allan Kaprow—who first used the term in 1958, Claes Oldenburg, Jim Dine, Red Grooms, and Robert Whitman.
Another trend in art which has been associated with the term postmodern is the use of a number of different media together, as in the intermedia work of Dick Higgins and Fluxus described above. Ihab Hassan includes "Intermedia, the fusion of forms, the confusion of realms," in his list of the characteristics of postmodern art. One of the most common forms of "multi-media art" is the use of video-tape and CRT monitors, termed video art. While the theory of combining multiple arts into one art is quite old, and has been revived periodically, the postmodern manifestation is often in combination with performance art, where the dramatic subtext is removed, and what is left is the specific statements of the artist in question or the conceptual statement of their action.
Fluxus was named and loosely organized in 1962 by George Maciunas (1931–78), a Lithuanian-born American artist. Fluxus traces its beginnings to John Cage's 1957 to 1959 Experimental Composition classes at the New School for Social Research in New York City. Many of his students were artists working in other media with little or no background in music. Cage's students included Fluxus founding members Jackson Mac Low, Al Hansen, George Brecht and Dick Higgins.
Fluxus encouraged a do-it-yourself aesthetic and valued simplicity over complexity. Like Dada before it, Fluxus included a strong current of anti-commercialism and an anti-art sensibility, disparaging the conventional market-driven art world in favor of an artist-centered creative practice. Fluxus artists preferred to work with whatever materials were at hand, and either created their own work or collaborated in the creation process with their colleagues.
Andreas Huyssen criticises attempts to claim Fluxus for postmodernism as "either the master-code of postmodernism or the ultimately unrepresentable art movement – as it were, postmodernism's sublime." Instead he sees Fluxus as a major Neo-Dadaist phenomena within the avant-garde tradition. It did not represent a major advance in the development of artistic strategies, though it did express a rebellion against, "the administered culture of the 1950s, in which a moderate, domesticated Modernism served as ideological prop to the Cold War."
The continuation of abstract expressionism, color field painting, lyrical abstraction, geometric abstraction, minimalism, abstract illusionism, process art, pop art, postminimalism, and other late 20th-century modernist movements in both painting and sculpture continue through the first decade of the 21st century and constitute radical new directions in those mediums.
At the turn of the 21st century, well-established artists such as Sir Anthony Caro, Lucian Freud, Cy Twombly, Robert Rauschenberg, Jasper Johns, Agnes Martin, Al Held, Ellsworth Kelly, Helen Frankenthaler, Frank Stella, Kenneth Noland, Jules Olitski, Claes Oldenburg, Jim Dine, James Rosenquist, Alex Katz, Philip Pearlstein, and younger artists including Brice Marden, Chuck Close, Sam Gilliam, Isaac Witkin, Sean Scully, Mahirwan Mamtani, Joseph Nechvatal, Elizabeth Murray, Larry Poons, Richard Serra, Walter Darby Bannard, Larry Zox, Ronnie Landfield, Ronald Davis, Dan Christensen, Joel Shapiro, Tom Otterness, Joan Snyder, Ross Bleckner, Archie Rand, Susan Crile, and dozens of others continued to produce vital and influential paintings and sculpture.
Differences between Modernism and postmodernism
By the early 1980s the postmodern movement in art and architecture began to establish its position through various conceptual and intermedia formats. Postmodernism in music and literature began to take hold earlier. In music, postmodernism is described in one reference work as a "term introduced in the 1970s", while in British literature The Oxford Encyclopedia of British Literature sees Modernism "ceding its predominance to postmodernism" as early as 1939. However, dates are highly debatable, especially since, according to Andreas Huyssen, "one critic's postmodernism is another critic's modernism". This includes those who are critical of the division between the two, who see them as two aspects of the same movement and believe that late Modernism continues.
Modernism is an encompassing label for a wide variety of cultural movements. Postmodernism is essentially a centralized movement that named itself, based on socio-political theory, although the term is now used in a wider sense to refer to activities from the 20th century onwards which exhibit awareness of and reinterpret the modern.
Postmodern theory asserts that the attempt to canonise Modernism "after the fact" is doomed to irresolvable contradictions.
In a narrower sense, what was modernist was not necessarily also postmodern. Those elements of Modernism which accentuated the benefits of rationality and socio-technological progress were only modernist.
Goals of the movement
Rejection and detournement of tradition
Many modernists believed that by rejecting tradition they could discover radically new ways of making art. Arguably the most paradigmatic motive of Modernism is the rejection of tradition as obsolete, together with its reprise, incorporation, rewriting, recapitulation, revision and parody in new forms.
T. S. Eliot, however, emphasized the relation of the artist to tradition, writing the following:
- "[W]e shall often find that not only the best, but the most individual parts of [a poet's] work, may be those in which the dead poets, his ancestors, assert their immortality most vigorously."
Literary scholar Peter Childs sums up the complexity:
- "There were paradoxical if not opposed trends towards revolutionary and reactionary positions, fear of the new and delight at the disappearance of the old, nihilism and fanatical enthusiasm, creativity and despair."
These oppositions are inherent to modernism: it is in its broadest cultural sense the assessment of the past as different to the modern age, the recognition that the world was becoming more complex, and that the old "final authorities" (God, government, science, and reason) were subject to intense critical scrutiny.
Challenge to false harmony and coherence
The philosopher Theodor W. Adorno wrote:
- "Modernity is a qualitative, not a chronological, category. Just as it cannot be reduced to abstract form, with equal necessity it must turn its back on conventional surface coherence, the appearance of harmony, the order corroborated merely by replication."
Adorno understood modernity as the rejection of the false rationality, harmony, and coherence of Enlightenment thinking, art, and music. Arnold Schoenberg rejected traditional tonal harmony, the hierarchical system of organizing works of music that had guided music making for at least a century and a half. He believed he had discovered a wholly new way of organizing sound, based in the use of twelve-note rows.
Abstract artists, taking as their examples the impressionists, as well as Paul Cézanne and Edvard Munch, began with the assumption that color and shape, not the depiction of the natural world, formed the essential characteristics of art. Wassily Kandinsky, Piet Mondrian, and Kazimir Malevich all believed in redefining art as the arrangement of pure color. The use of photography, which had rendered much of the representational function of visual art obsolete, strongly affected this aspect of modernism. However, these artists also believed that by rejecting the depiction of material objects they helped art move from a materialist to a spiritualist phase of development.
Pragmatic modernist architecture
Other modernists, especially those involved in design, had more pragmatic views. Modernist architects and designers believed that new technology rendered old styles of building obsolete. Le Corbusier thought that buildings should function as "machines for living in", analogous to cars, which he saw as machines for traveling in. Just as cars had replaced the horse, so modernist design should reject the old styles and structures inherited from Ancient Greece or from the Middle Ages. In some cases form superseded function. Following this machine aesthetic, modernist designers typically rejected decorative motifs in design, preferring to emphasize the materials used and pure geometrical forms. The skyscraper, such as Ludwig Mies van der Rohe's Seagram Building in New York (1956–1958), became the archetypal modernist building. Modernist design of houses and furniture also typically emphasized simplicity and clarity of form, open-plan interiors, and the absence of clutter.
Modernism reversed the 19th-century relationship of public and private: in the 19th century, public buildings were horizontally expansive for a variety of technical reasons, and private buildings emphasized verticality—to fit more private space on increasingly limited land. Conversely, in the 20th century, public buildings became vertically oriented and private buildings became organized horizontally. Many aspects of modernist design still persist within the mainstream of contemporary architecture today, though its previous dogmatism has given way to a more playful use of decoration, historical quotation, and spatial drama. In other arts such pragmatic considerations were less important.
Counter consumerism and mass culture
In literature and visual art some modernists sought to defy expectations mainly in order to make their art more vivid, or to force the audience to take the trouble to question their own preconceptions. This aspect of Modernism has often seemed a reaction to consumer culture, which developed in Europe and North America in the late 19th century. Whereas most manufacturers try to make products that will be marketable by appealing to preferences and prejudices, high modernists rejected such consumerist attitudes in order to undermine conventional thinking. The art critic Clement Greenberg expounded this theory of Modernism in his essay Avant-Garde and Kitsch. Greenberg labelled the products of consumer culture "kitsch", because their design aimed simply to have maximum appeal, with any difficult features removed. For Greenberg, Modernism thus formed a reaction against the development of such examples of modern consumer culture as commercial popular music, Hollywood, and advertising. Greenberg associated this with the revolutionary rejection of capitalism.
Some modernists did see themselves as part of a revolutionary culture—one that included political revolution. Others rejected conventional politics as well as artistic conventions, believing that a revolution of political consciousness had greater importance than a change in political structures. Many modernists saw themselves as apolitical. Others, such as T. S. Eliot, rejected mass popular culture from a conservative position. Some even argue that Modernism in literature and art functioned to sustain an elite culture which excluded the majority of the population.
Criticism and hostility
Modernism's stress on freedom of expression, experimentation, radicalism, and primitivism disregards conventional expectations. In many art forms this often meant startling and alienating audiences with bizarre and unpredictable effects, as in the strange and disturbing combinations of motifs in surrealism or the use of extreme dissonance and atonality in modernist music. In literature this often involved the rejection of intelligible plots or characterization in novels, or the creation of poetry that defied clear interpretation.
After the rise of Joseph Stalin, the Soviet Communist government rejected Modernism on the grounds of alleged elitism, although it had previously endorsed futurism and constructivism. The Nazi government of Germany deemed Modernism narcissistic and nonsensical, as well as "Jewish" (see Anti-semitism) and "Negro". The Nazis exhibited modernist paintings alongside works by the mentally ill in an exhibition entitled Degenerate Art. Accusations of "formalism" could lead to the end of a career, or worse. For this reason many modernists of the post-war generation felt that they were the most important bulwark against totalitarianism, the "canary in the coal mine", whose repression by a government or other group with supposed authority represented a warning that individual liberties were being threatened. In a very different spirit from the Nazis' comparison, Louis A. Sass likewise linked madness, specifically schizophrenia, and Modernism, noting their shared disjunctive narratives, surreal images, and incoherence.
In fact, Modernism flourished mainly in consumer/capitalist societies, despite the fact that its proponents often rejected consumerism itself. However, high modernism began to merge with consumer culture after World War II, especially during the 1960s. In Britain, a youth sub-culture emerged calling itself "modernist" (usually shortened to Mod), following such representative music groups as The Who and The Kinks. The likes of Bob Dylan, Serge Gainsbourg and The Rolling Stones combined popular musical traditions with modernist verse, adopting literary devices derived from James Joyce, Samuel Beckett, James Thurber, T. S. Eliot, Guillaume Apollinaire, Allen Ginsberg, and others. The Beatles developed along similar lines, creating various modernist musical effects on several albums, while musicians such as Frank Zappa, Syd Barrett and Captain Beefheart proved even more experimental. Modernist devices also started to appear in popular cinema, and later on in music videos. Modernist design also began to enter the mainstream of popular culture, as simplified and stylized forms became popular, often associated with dreams of a space age high-tech future.
This merging of consumer and high versions of modernist culture led to a radical transformation of the meaning of "modernism". First, it implied that a movement based on the rejection of tradition had become a tradition of its own. Second, it demonstrated that the distinction between elite modernist and mass consumerist culture had lost its precision. Some writers declared that Modernism had become so institutionalized that it was now "post avant-garde", indicating that it had lost its power as a revolutionary movement. Many have interpreted this transformation as the beginning of the phase that became known as postmodernism. For others, such as art critic Robert Hughes, postmodernism represents an extension of modernism.
"Anti-modern" or "counter-modern" movements seek to emphasize holism, connection and spirituality as remedies or antidotes to modernism. Such movements see Modernism as reductionist, and therefore subject to an inability to see systemic and emergent effects. Many modernists came to this viewpoint, for example Paul Hindemith in his late turn towards mysticism. Writers such as Paul H. Ray and Sherry Ruth Anderson, in The Cultural Creatives: How 50 Million People Are Changing the World (2000), Fredrick Turner in A Culture of Hope and Lester Brown in Plan B, have articulated a critique of the basic idea of Modernism itself – that individual creative expression should conform to the realities of technology. Instead, they argue, individual creativity should make everyday life more emotionally acceptable.
In some fields the effects of Modernism have remained stronger and more persistent than in others. Visual art has made the most complete break with its past. Most major capital cities have museums devoted to Modern Art as distinct from post-Renaissance art (circa 1400 to circa 1900). Examples include the Museum of Modern Art in New York, the Tate Modern in London, and the Centre Pompidou in Paris. These galleries make no distinction between modernist and postmodernist phases, seeing both as developments within Modern Art.
- Late modernism
- American modernism
- Russian avant-garde
- Modern architecture
- Contemporary architecture
- Modern art
- Contemporary art
- Postmodern art
- Experimental film
- Modernism (music)
- 20th-century classical music
- History of classical music traditions (section 20th century music)
- Contemporary classical music
- Experimental music
- Contemporary literature
- Contemporary French literature
- Modern literature
- Modernist literature
- Experimental literature
- Modernist poetry
- Modernist poetry in English
- History of theatre
- Theatre of the Absurd
- Hans Hofmann biography. Retrieved 30 January 2009
- Barth (1979) quotation:
The ground motive of modernism, Graff asserts, was criticism of the nineteenth-century bourgeois social order and its world view. Its artistic strategy was the self-conscious overturning of the conventions of bourgeois realism [...] the antirationalist, antirealist, antibourgeois program of Modernism [...] the modernists, carrying the torch of romanticism, taught us that linearity, rationality, consciousness, cause and effect, naïve illusionism, transparent language, innocent anecdote, and middle-class moral conventions are not the whole story
- Graff (1973)
- Graff (1975).
- Eco (1990) p.95 quote:
Each of the types of repetition that we have examined is not limited to the mass media but belongs by right to the entire history of artistic creativity; plagiarism, quotation, parody, the ironic retake are typical of the entire artistic-literary tradition.
Much art has been and is repetitive. The concept of absolute originality is a contemporary one, born with Romanticism; classical art was in vast measure serial, and the "modern" avant-garde (at the beginning of this century) challenged the Romantic idea of "creation from nothingness," with its techniques of collage, mustachios on the Mona Lisa, art about art, and so on.
- Steiner (1998) pp. 489–90 quote:
(pp.489–90) The modernist movement which dominated art, music, letters during the first half of the century was, at critical points, a strategy of conservation, of custodianship. Stravinsky's genius developed through phases of recapitulation. He took from Machaut, Gesualdo, Monteverdi. He mimed Tchaikovsky and Gounod, the Beethoven piano sonatas, the symphonies of Haydn, the operas of Pergolesi and Glinka. He incorporated Debussy and Webern into his own idiom. In each instance the listener was meant to recognize the source, to grasp the intent of a transformation which left salient aspects of the original intact. The history of Picasso is marked by retrospection. The explicit variations on classical pastoral themes, the citations from and pastiches of Rembrandt, Goya, Velazquez, Manet, are external products of a constant revision, a 'seeing again' in the light of technical and cultural shifts. Had we only Picasso's sculptures, graphics, and paintings, we could reconstruct a fair portion of the development of the arts from the Minoan to Cezanne. In twentieth-century literature, the elements of reprise have been obsessive, and they have organized precisely those texts which at first seemed most revolutionary. 'The Waste Land', Ulysses, Pound's Cantos are deliberate assemblages, in-gatherings of a cultural past felt to be in danger of dissolution. The long sequence of imitations, translations, masked quotations, and explicit historical paintings in Robert Lowell's History has carried the same technique into the 1970s. [...] In Modernism collage has been the representative device. The new, even at its most scandalous, has been set against an informing background and framework of tradition. Stravinsky, Picasso, Braque, Eliot, Joyce, Pound—the 'makers of the new'—have been neo-classics, often as observant of canonic precedent as their seventeenth-century forbears.
- Childs, Peter Modernism (Routledge, 2000). ISBN 0-415-19647-7. p. 17. Accessed on 8 February 2009.
- Pericles Lewis, Modernism, Nationalism, and the Novel (Cambridge University Press, 2000). pp 38–39.
- "[James] Joyce's Ulysses is a comedy not divine, ending, like Dante's, in the vision of a God whose will is our peace, but human all-too-human...." Peter Faulkner, Modernism (Taylor & Francis, 1990). p 60.
- Adorno, Theodor. Minima Moralia. Verso 2005, p. 218.
- Gardner, Helen, Horst De la Croix, Richard G. Tansey, and Diane Kirkpatrick. Gardner's Art Through the Ages (San Diego: Harcourt Brace Jovanovich, 1991). ISBN 0-15-503770-6. p. 953.
- Orton and Pollock (1996) p.141 quote:
The term avant-garde had a shorter provenance in the language and literature of art. It was not until the 20th century that its military or naval meaning (the foremost division or detachment of an advancing force) or the political usage (an elite party to lead the masses) was appropriated by art criticism. Modernist art history has evacuated the term's historical meanings, using it to signify an idea about the way in which art develops and artists function in relation to society.
- "In the twentieth century, the social processes that bring this maelstrom into being, and keep it in a state of perpetual becoming, have come to be called 'modernization'. These world-historical processes have nourished an amazing variety of visions and ideas that aim to make men and women the subjects as well as the objects of modernization, to give them the power to change the world that is changing them, to make their way through the maelstrom and make it their own. Over the past century, these visions and values have come to be loosely grouped together under the name of 'modernism'" (Berman 1988, 16).
- Lee Oser, The Ethics of Modernism: Moral Ideas in Yeats, Eliot, Joyce, Woolf and Beckett (Cambridge University Press, 2007); F.J. Marker & C.D. Innes, Modernism in European Drama: Ibsen, Strindberg, Pirandello, Beckett; Morag Shiach, "Situating Samuel Beckett", pp. 234–247 in The Cambridge Companion to the Modernist Novel (Cambridge University Press, 2007); Kathryne V. Lindberg, Reading Pound Reading: Modernism After Nietzsche (Oxford University Press, 1987); Pericles Lewis, The Cambridge Introduction to Modernism (Cambridge University Press, 2007), p. 21.
- "J.M.W. Turner." Encyclopædia Britannica. Encyclopædia Britannica Online. Encyclopædia Britannica Inc., 2013. Web. 16 Jan. 2013. http://www.britannica.com/EBchecked/topic/610274/J-M-W-Turner.
- The Bloomsbury Guide to English Literature, ed. Marion Wynne-Davies. New York: Prentice Hall, 1990, p. 815.
- The Bloomsbury Guide, p. 816.
- The First Moderns: Profiles in the Origin of Twentieth-Century Thought. Chicago: University of Chicago Press, 1997, Chapters 3 & 4.
- Frascina and Harrison 1982, p. 5.
- Clement Greenberg: Modernism and Postmodernism, seventh paragraph of the essay. Accessed on 15 June 2006
- Phillip Dennis Cate and Mary Shaw, eds., The Spirit of Montmartre: Cabarets, Humor, and the Avant-Garde, 1875–1905. New Brunswick, NJ: Rutgers University, 1996.
- Guy Debord, 18 November 1958, as quoted in Supreme Height of the Defenders of Surrealism in Paris and the Revelation of their Real Value, Situationist International #2
- The Oxford Companion to English Literature, ed. Margaret Drabble, Oxford: Oxford University Press, 1996, p. 966.
- Diané Collinson, Fifty Major Philosophers: A Reference Guide, p. 131.
- The Bloomsbury Guides to English Literature: The Twentieth Century, ed. Linda R. Williams. London: Bloomsbury, 1992, pp. 108–9.
- Collinson, 132.
- Ulysses has been called "a demonstration and summation of the entire [Modernist] movement". Beebe, Maurice (Fall 1972). "Ulysses and the Age of Modernism". James Joyce Quarterly (University of Tulsa) 10 (1): p. 176.
- Robbins, Daniel, Albert Gleizes 1881–1953, A Retrospective Exhibition (exh. cat.). The Solomon R. Guggenheim Museum, New York, 1964, pp. 12–25
- Degenerate Art Database (Beschlagnahme Inventar, Entartete Kunst)
- Ezra Pound, Make It New (London: Faber, 1934).
- Kevin J. H. Dettmar, "Modernism", in The Oxford Encyclopedia of British Literature, ed. David Scott Kastan. Oxford University Press, 2005. http://www.oxfordreference.com 27 October 2011.
- "modernism", The Oxford Companion to English Literature. Edited by Dinah Birch. Oxford University Press Inc. Oxford Reference Online. Oxford University Press. http://www.oxfordreference.com 27 October 2011
- Clement Greenberg: Modernism and Postmodernism, William Dobell Memorial Lecture, Sydney, Australia, 31 October 1979, Arts 54, No.6 (February 1980). His final essay on modernism. Retrieved 26 October 2011
- Paul Griffiths "modernism" The Oxford Companion to Music. Ed. Alison Latham. Oxford University Press, 2002. Oxford Reference Online. Oxford University Press. http://www.oxfordreference.com 27 October 2011
- The Cambridge Companion to Irish Literature, ed. John Wilson Foster. Cambridge: Cambridge University Press, 2006.
- Nochlin, Linda, Ch.1 in: Women Artists at the Millennium (edited by C. Armstrong and C. de Zegher) MIT Press, 2006.
- Pollock, Griselda, Encounters in the Virtual Feminist Museum: Time, Space and the Archive. Routledge, 2007.
- De Zegher, Catherine, and Teicher, Hendel (eds.), 3 X Abstraction. New Haven: Yale University Press. 2005.
- Aldrich, Larry. Young Lyrical Painters, Art in America, v.57, n6, November–December 1969, pp.104–113.
- Movers and Shakers, New York, "Leaving C&M", by Sarah Douglas, Art+Auction, March 2007, V.XXXNo7.
- Martin, Ann Ray, and Howard Junker. "The New Art: It's Way, Way Out", Newsweek, 29 July 1968: pp. 3, 55–63.
- Hal Foster, The Return of the Real: The Avant-garde at the End of the Century, MIT Press, 1996, pp. 44–53. ISBN 0-262-56107-7
- Craig Owens, Beyond Recognition: Representation, Power, and Culture, London and Berkeley: University of California Press (1992), pp. 74–75.
- Steven Best, Douglas Kellner, The Postmodern Turn, Guilford Press, 1997, p. 174. ISBN 1-57230-221-6
- "Fluxus & Happening – Allan Kaprow". Retrieved 4 May 2010.
- Finkel, Jori (13 April 2008). "Happenings Are Happening Again". The New York Times. Retrieved 23 April 2010.
- Ihab Hassan in Lawrence E. Cahoone, From Modernism to Postmodernism: An Anthology, Blackwell Publishing, 2003. p. 13. ISBN 0-631-23213-3
- Andreas Huyssen, Twilight Memories: Marking Time in a Culture of Amnesia, Routledge, 1995. p. 192. ISBN 0-415-90934-1
- Andreas Huyssen, Twilight Memories: Marking Time in a Culture of Amnesia, Routledge, 1995. p. 196. ISBN 0-415-90934-1
- Ratcliff, Carter. The New Informalists, Art News, v. 68, n. 8, December 1969, p. 72.
- Barbara Rose. American Painting. Part Two: The Twentieth Century. Published by Skira – Rizzoli, New York, 1969
- Walter Darby Bannard. "Notes on American Painting of the Sixties." Artforum, January 1970, vol. 8, no. 5, pp. 40–45.
- "postmodernism", The Penguin Companion to Classical Music, ed. Paul Griffiths. London: Penguin, 2004.
- J. H. Dettmar "Modernism", The Oxford Encyclopedia of British Literature. David Scott Kastan. Oxford University Press 2005. http://www.oxfordreference.com 27 October 2011.
- After the Great Divide: Modernism, Mass Culture and Postmodernism. London: Macmillan, 1988, p. 59. Quoted in Hawthorn, Studying the Novel, p. 63; Simon Malpas, Postmodern Debates.
- Simon Malpas, Postmodern Debates
- Merriam-Webster's definition of postmodernism
- Ruth Reichl, Cook's November 1989; American Heritage Dictionary's definition of the postmodern
- Postmodernism. Georgetown University
- Wagner, British, Irish and American Literature, Trier 2002, p. 210–2
- T. S. Eliot, "Tradition and the Individual Talent" (1919), in Selected Essays. Paperback Edition. (Faber & Faber, 1999).
- Clement Greenberg, Art and Culture, Beacon Press, 1961
- Sass, Louis A. (1992). Madness and Modernism: Insanity in the Light of Modern Art, Literature, and Thought. New York: Basic Books. Cited in Bauer, Amy (2004). "Cognition, Constraints, and Conceptual Blends in Modernist Music", in The Pleasure of Modernist Music. ISBN 1-58046-143-3.
- Jack, Ian (6 June 2009). "Set In Stone". Guardian.
- John Barth (1979) The Literature of Replenishment, later republished in The Friday Book (1984).
- Eco, Umberto (1990) Interpreting Serials in The limits of interpretation, pp. 83–100, excerpt
- Gerald Graff (1973) The Myth of the Postmodernist Breakthrough, TriQuarterly, 26 (Winter, 1973) 383–417; rept in The Novel Today: Contemporary Writers on Modern Fiction Malcolm Bradbury, ed., (London: Fontana, 1977); reprinted in Proza Nowa Amerykanska, ed., Szice Krytyczne (Warsaw, Poland, 1984); reprinted in Postmodernism in American Literature: A Critical Anthology, Manfred Putz and Peter Freese, eds., (Darmstadt: Thesen Verlag, 1984), 58–81.
- Gerald Graff (1975) Babbitt at the Abyss: The Social Context of Postmodern American Fiction, TriQuarterly, No. 33 (Spring 1975), pp. 307–37; reprinted in Putz and Freese, eds., Postmodernism and American Literature.
- Orton, Fred and Pollock, Griselda (1996) Avant-Gardes and Partisans Reviewed, Manchester University.
- Steiner, George (1998) After Babel, ch.6 Topologies of culture, 3rd revised edition
- Armstrong, Carol and de Zegher, Catherine (eds.), Women Artists at the Millennium, Cambridge, MA: October Books, MIT Press, 2006. ISBN 978-0-262-01226-3.
- Aspray, William & Philip Kitcher, eds., History and Philosophy of Modern Mathematics, Minnesota Studies in the Philosophy of Science vol XI, Minneapolis: University of Minnesota Press, 1988
- Baker, Houston A., Jr., Modernism and the Harlem Renaissance, Chicago: University of Chicago Press, 1987
- Berman, Marshall, All That Is Solid Melts Into Air: The Experience of Modernity. Second ed. London: Penguin, 1988. ISBN 0-14-010962-5.
- Bradbury, Malcolm, & James McFarlane (eds.), Modernism: A Guide to European Literature 1890–1930 (Penguin "Penguin Literary Criticism" series, 1978, ISBN 0-14-013832-3).
- Brush, Stephen G., The History of Modern Science: A Guide to the Second Scientific Revolution, 1800–1950, Ames, IA: Iowa State University Press, 1988
- Centre George Pompidou, Face a l'Histoire, 1933–1996. Flammarion, 1996. ISBN 2-85850-898-4.
- Crouch, Christopher, Modernism in art design and architecture, New York: St. Martins Press, 2000
- Everdell, William R., The First Moderns: Profiles in the Origins of Twentieth Century Thought, Chicago: University of Chicago Press, 1997
- Eysteinsson, Astradur, The Concept of Modernism, Ithaca, NY: Cornell University Press, 1992
- Friedman, Julia. Beyond Symbolism and Surrealism: Alexei Remizov's Synthetic Art, Northwestern University Press, 2010. ISBN 0-8101-2617-6 (Trade Cloth)
- Frascina, Francis, and Charles Harrison (eds.). Modern Art and Modernism: A Critical Anthology. Published in association with The Open University. London: Harper and Row, Ltd. Reprinted, London: Paul Chapman Publishing, Ltd., 1982.
- Gates, Henry Louis. The Norton Anthology of African American Literature. W.W. Norton & Company, Inc., 2004.
- Hughes, Robert, The Shock of the New: Art and the Century of Change (Gardners Books, 1991, ISBN 0-500-27582-3).
- Kenner, Hugh, The Pound Era (1971), Berkeley, CA: University of California Press, 1973
- Kern, Stephen, The Culture of Time and Space, Cambridge, MA: Harvard University Press, 1983
- Kolocotroni, Vassiliki et al., ed., Modernism: An Anthology of Sources and Documents (Edinburgh: Edinburgh University Press, 1998).
- Levenson, Michael (ed.), The Cambridge Companion to Modernism (Cambridge University Press, "Cambridge Companions to Literature" series, 1999, ISBN 0-521-49866-X).
- Lewis, Pericles. The Cambridge Introduction to Modernism (Cambridge: Cambridge University Press, 2007).
- Nicholls, Peter, Modernisms: A Literary Guide (Hampshire and London: Macmillan, 1995).
- Pevsner, Nikolaus, Pioneers of Modern Design: From William Morris to Walter Gropius (New Haven, CT: Yale University Press, 2005, ISBN 0-300-10571-1).
- —, The Sources of Modern Architecture and Design (Thames & Hudson, "World of Art" series, 1985, ISBN 0-500-20072-6).
- Pollock, Griselda, Generations and Geographies in the Visual Arts. (Routledge, London, 1996. ISBN 0-415-14128-1)
- Pollock, Griselda, and Florence, Penny, Looking Back to the Future: Essays by Griselda Pollock from the 1990s. (New York: G&B New Arts Press, 2001. ISBN 90-5701-132-8)
- Potter, Rachael (January 2009). "Obscene Modernism and the Trade in Salacious Books". Modernism/modernity (The Johns Hopkins University Press) 16 (1). ISSN 1071-6068.
- Sass, Louis A. (1992). Madness and Modernism: Insanity in the Light of Modern Art, Literature, and Thought. New York: Basic Books. Cited in Bauer, Amy (2004). "Cognition, Constraints, and Conceptual Blends in Modernist Music", in The Pleasure of Modernist Music. ISBN 1-58046-143-3.
- Schorske, Carl. Fin-de-Siècle Vienna: Politics and Culture. Vintage, 1980. ISBN 978-0394744780.
- Schwartz, Sanford, The Matrix of Modernism: Pound, Eliot, and Early Twentieth Century Thought, Princeton, NJ: Princeton University Press, 1985
- Van Loo, Sofie (ed.), Gorge(l). Royal Museum of Fine Arts, Antwerp, 2006. ISBN 90-76979-35-9; ISBN 978-90-76979-35-9.
- Weston, Richard, Modernism (Phaidon Press, 2001, ISBN 0-7148-4099-8).
- de Zegher, Catherine, Inside the Visible. (Cambridge, MA: MIT Press, 1996).
- Ballard, J.G., on Modernism.
- Denzer, Anthony S., PhD, Masters of Modernism.
- Hoppé, E.O., photographer, Edwardian Modernists.
- Malady of Writing: Modernism you can dance to. An online radio show that presents a humorous version of modernism.
- Modernism Lab @ Yale University
- Modernism/Modernity, official publication of the Modernist Studies Association
- Modernism vs. Postmodernism
- Pope St. Pius X's encyclical Pascendi, in which he defines Modernism as "the synthesis of all heresies".
The Spirit of Moncada: Fidel Castro's Rise to Power, 1953-1959

CSC 1984
SUBJECT AREA: Foreign Policy

ABSTRACT

Author: BOCKMAN, Larry James, Major, U.S. Marine Corps
Title: The Spirit of Moncada: Fidel Castro's Rise to Power, 1953-1959
Publisher: Marine Corps Command and Staff College
Date: 1 April 1984

Since his overthrow of President Batista in 1959, the degree of influence that Fidel Castro has exercised over worldwide political and military events has been astounding. His reach has far exceeded the borders of the tiny island nation he rules. Not infrequently, great and emerging nations alike have altered their most diligent strategies in response to the Cuban leader's interpretation of the world order. How did an obscure, middle-class lawyer with no military training first rise to such prominence? The object of this study is to discover the answer to that question.

The essay opens with a brief discussion of Cuba's geographic, demographic and historic heritages. This is followed by a section that outlines the major economic, social, political and military factors which formed the climate for Castro's insurrection. The main body of the study follows with an examination of the insurrection itself. Included are detailed historical events, strategies and tactics, beginning with Castro's background and proceeding through his emergence at the head of the Cuban government. Both sides of the conflict are presented, where appropriate, to maintain balance. The final section of the paper contains an analysis of the major elements leading to Castro's victory. These encompass, among others: the role of the United States, Castro's guerrilla warfare philosophy, Batista's counter-guerrilla tactics, and the Castro persona.

This study relied heavily upon previously published documents and books concerning various aspects of Castro's background and rise to power.
Particularly useful were those works written through eyewitness accounts of the actual events addressed in the paper.

WAR SINCE 1945 SEMINAR

The Spirit of Moncada: Fidel Castro's Rise to Power, 1953-1959

Major Larry James Bockman, USMC
2 April 1984

Marine Corps Command and Staff College
Marine Corps Development and Education Command
Quantico, Virginia 22134

ACKNOWLEDGEMENTS

I owe a debt of gratitude to a number of people for their professional assistance, guidance, moral support and encouragement. They have my sincere appreciation. Of the many, I would like to single out for special thanks Lieutenant Colonels Donald F. Bittner and James F. Foster. Their editorial and conceptual assistance plus personal encouragement assisted me over several obstacles. I would especially like to acknowledge my debt to the staff of Breckinridge Library, and particularly Ms. Mary Porter, the Reference Librarian. Her capable and efficient assistance in securing books and documents was superb. Likewise, I would like to thank Ms. Pam Lohman for cheerfully and expertly typing this manuscript, and for never being discouraged by revisions or deadlines. Finally, I would like to thank my wife, Karen. Without her editorial assistance, moral support and encouragement, I would not have completed this project.

TABLE OF CONTENTS

Introduction
Chapter 1. Background
    Geographic
    Demographic
    Historical
    Notes
Chapter 2. Inducing Factors to Revolution
    Economic
    Social
    Political
    Military
    Notes
Chapter 3. Castro's Insurrection
    The Sergeant and the Attorney
    Moncada
    Movimiento 26 de Julio
    Sierra Maestra
    Total War
    Batista's Departure
    Notes
Chapter 4. Castro's Revolution
    At Long Last: Victory
    The Communist State
    Notes
Chapter 5. Analyses and Conclusion
    Guerrilla Warfare a la Castro
    Internal Defense
    Neutralization of the United States
    El Caudillo
    Conclusion
    Notes
Maps
Bibliography

INTRODUCTION

In the early 1950's there were many Cubans who believed that their country was in the midst of a gradual revolution that had begun as early as 1930. Political and economic upheaval and social rebellion had become commonplace and expected throughout Cuban society. Most viewed this process as disruptive, but nonetheless necessary if Cuba was ever to attain constitutionality and honest government. Among those Cubans was Fidel Castro, a young lawyer just entering practice. Having earned an early reputation as a champion of the oppressed and underprivileged, Castro was anxious to use his political skills to harness and guide the Cuban revolutionary spirit. The spirit was crushed, however, when Fulgencio Batista seized control of the Cuban government in 1952. The insurrection which Castro orchestrated between 1953 and 1959 was designed to revitalize the interrupted Cuban revolution and install Fidel Castro as its epicenter.

The purpose of this essay is to examine the Cuban insurgency of 1953-1959, focusing on Fidel Castro's role. The premise of this effort is that the detailed examination of Castro's rise to power and Batista's attempts to stop him can increase our understanding of the evolution of insurgencies and the difficulties associated with countering them. The study's objective is to achieve that understanding by discovering how Castro won, or perhaps more importantly, why Batista lost. The scope of the study is limited primarily to events occurring on the island of Cuba between 1953 and 1959. No attempt is made to consider other world events unless there is some direct relationship. Likewise, recent Cuban history beyond Castro's consolidation of power in 1959 is omitted.
Where appropriate, I have tried to present both sides. In retrospect, I must admit that it is extremely difficult to remain impartial where Castro is concerned. The man was, and still is, a hero to millions of people. Through the course of my research I developed a grudging admiration for him. While I have attempted to keep this paper as dispassionate as possible, I am sure that some of that admiration has filtered through. *Journalists were notoriously pro-Castro during this period. CHAPTER I: BACKGROUND To properly understand Fidel Castro and the insurrection which he led, one must first grasp the essence of Cuba itself. These initial pages will provide a brief overview of the major geographic, demographic and historic factors which have influenced Cuba and its people from the earliest Spanish explorers until Fulgencio Batista's 1952 coup. Geographic 1/ Cuba, situated approximately 90 miles from the southern coast of Florida, is actually an archipelago of more than 1600 keys and small islands clustered around Cuba proper (see Map #1). The island is 745 miles long, and 25 to 120 miles at its narrowest and widest points. It boasts excellent harbors, although only Havana has ever been extensively developed. Cuba enjoys a moderately warm climate with temperatures varying little more than 10-15 degrees between its summer and winter months. The two seasons are differentiated from each other mainly by the level of rainfall, with the rainy season running from May through October. The stable climate is marred only by the island's vulnerability to passing hurricanes. The unusually varied terrain is about 40% mountainous. The Sierra Maestra and smaller parallel ranges dominate the eastern provinces of Oriente and Camaguey. Cuba's highest mountain, Pico Turquino (over 6,500 feet), is located in the Sierra Maestra range. In Las Villas province, in the central part of the island, the Trinidad and Sancti-Spiritus ranges form the so-called Escambray. 
Lesser ranges are located in western Cuba. The island has no major lakes or rivers. Only eight percent of the land is forested. Cuba's most precious natural resource is probably her land. A red soil, ideal for sugarcane, is prevalent in Matanzas and Camaguey provinces. Mineral resources found in sufficient quantities to mine include: iron, copper, nickel, chromite, manganese, tungsten and asphalt. Since Cuba lacks fossil fuel, its industrial prospects are limited without reliance upon heavy imports.* Demographic 2/ Roughly the size of the state of Pennsylvania, Cuba supported a population of roughly 5,830,000 or 132 people per square mile in 1953. With a growth rate of 2.5 percent, the population had increased to an estimated 6,700,000 by 1960. There were only three other Latin American countries with comparable or higher population densities. *A favorable offshore geological structure may contain large oil and natural gas reserves. In 1953, the population of Cuba was estimated to be 30 percent white (mainly Creole), 20 percent mestizo (racially mixed), 49 percent black and one percent oriental. While a certain degree of racial discrimination and segregation was practiced in Cuba prior to the 1950's, race generally did not play a major causative role in any of the Cuban insurrections. Race, as an issue, was largely overshadowed by the existing class system. The upper-class, which was almost exclusively white, excluded nonwhites from its schools and clubs. Upper-middle-class whites generally avoided any type of contact with nonwhites except as in employer-employee relationships. Nonwhites were usually underrepresented in most professional clubs. Usually the only way nonwhites could gain any social prestige was through memberships in nonwhite societies, labor unions, or the Communist Party. Except for one incident in 1911, there were no serious racial incidents in Cuba by 1953. 
3/ Historic 4/ Cuba was discovered and claimed for Spain by Christopher Columbus during his first voyage on October 26, 1492. Quickly settled under the guidance of its first governor, Diego Velasquez, the isle demonstrated a great deal of commerical promise until the mid-16th century. During this period considerable gold was found and farming was developed. After 1550, however, the island's internal development began to falter. Cuba's strategic location guarding the entrance to the Gulf of Mexico became far more important than its commerical value. She simply could not compete with the vast riches being envisioned and discovered further to the west. Consequently, Cuba became the political and military focal point for the Spanish exploration, conquest and colonization of the Caribbean Basin and North America. All Spanish convoys converged on Havana before dispersing throughout the Gulf or massing for the dangerous voyage back to Spain. Throughout the 17th and 18th centuries Cuba maintained her role as the epicenter of Spain's New World interests. The large Spanish population supported an effective militia, and the island was well garrisoned as a major military base. Subsidies from Mexico helped cover this expense as well as other costs connected with administration of the colony. As far as the European inhabitants of the island were concerned, with the possible exception of periodic foreign and pirate attacks, life during this period was good. During the early part of the 19th century, when most of the other Spanish colonies in South and Central America had risen in revolt, Cuba remained loyal. Her largely middle- class population was highly educated, prosperous and almost totally Spanish or Creole (Spaniards born in the New World). While slavery was present and becoming increasingly prevalent with the growth of the sugar cane industry, the large peasant class and/or slave population associated with insurrections in the other Spanish colonies did not exist. 
Moreover, administration of the island had been relatively liberal and quite benign since the French Bourbons had ascended to the Spanish throne in 1700. This era ended when Ferdinand VII was restored as King of Spain in 1814. Ferdinand's abandonment of the previous Bourbon policies quickly stimulated unrest, and the Cuban government became further centralized and militarized. In 1825, the governor was given extensive repressive powers based on a state of siege that existed following several minor revolts. Initiating a tendency that dogged them through every insurrection until Castro, Cubans in the mid-19th century were slow to revolt largely because they could not agree upon objectives. The desire to preserve slavery, the possiblity of increased trade, and pure intellectual ties led some to favor annexation to the United States. The U.S. Civil War dampened those sentiments, leaving most Creoles (the main source of dissatisfaction) to favor either autonomy, including reforms, within the Spanish Empire, or full independence. When it bacame clear in the 1860's that Spain was unwilling to let autonomy be a viable option, independence became the only realistic revolutionary course. The first major Cuban revolt against Spain began in 1868 and lasted for a decade. This has become known as the Ten Year's War. Led by Carlos Manuel de Cespedes, the Cuban revolutionaires won control over half the island before finally being defeated. The United States played a major role in support of the rebellion by providing the rebels with arms, supplies and a base for propaganda.* While the Pact of Zanjon, which ended the Ten Year's War, guaranteed that Spain would relax restrictions and improve conditions, Creole unrest remained. Small revolts in 1879-1880, 1884 and 1885 also failed. Cuban sugar exports were significantly reduced in 1894 when an increase in the U.S. tariff on sugar was announced. 
The resulting depression in the island's economy only served to deepen revolutionary fervor. In 1895, a political coalition, led by Jose Marti, renewed the insurrection. Interventionist sentiment and the mysterious sinking of the American battleship Maine in Havana's harbor on February 15, 1898, drew the United States into the Cuban struggle against Spain. The ensuing Spanish-American War marked the beginning of a close, though uneven, relationship between Cuba and the United States that was to continue until Castro's rise to power some 60 years later. With the signing of the Treaty of Paris in 1898, Cuba was placed under the protection of the United States. Minimizing the contributions of the Cuban insurrectionists in the war with Spain, Washington initially refused to recognize the rebel government, preferring instead to occupy and Americanize the island. American occupation continued from 1898-1902 with only marginal success. Although an impressive public school system was initiated and health care standards were enforced, the local population was largely unsupportive of the U.S. presence. Recognizing the long-range futility of American control of the island, the United States initiated steps in June 1900 to establish a democratic government. Local elections for municipal offices were held under the protection of U.S. officials. In September of that same year, 31 delegates were elected to a Cuban Constitutional Convention that drafted the Cuban Constitution of 1901. However, the United States decided not to completely abandon Cuba. With the Congressional passage of the Platt Amendment in March 1901, the United States guaranteed itself the right to intervene in Cuban affairs whenever appropriate. 5/ Fearing an otherwise indefinite occupation, the Cuban Constitutional Convention reluctantly agreed to adopt the amendment as part of its Constitution of 1901. *During the succeeding decades, the United States was to repeat these roles many times.
On May 20, 1902, the American occupation ended, and Tomas Estrada Palma, the first elected president of the new republic, took office. It was a day of national happiness tempered by concern that Cuba had not seen the last of U.S. interference. President Estrada Palma was honest and relatively effective. However, the discovery of his underhanded efforts to obtain a second term by inviting the United States to dispose of his political rivals led to a rebellion in 1906. Ironically, while Washington was not initially inclined to intervene based upon Estrada Palma's request, the outbreak of the rebellion forced a quick response. American Marines were dispatched to the island. This newest U.S. intervention, which lasted from 1906-1909, was heavily criticized by Cubans. From this time until 1933, Cuban Presidents and their political parties (Liberals and Conservatives) alternated in power without substantive changes in policy.* Both parties looked to the Platt Amendment as a potential way to avoid political defeat by obtaining U.S. military, economic or diplomatic intervention. The overall impact of Washington's protectionist role is best summarized in the following quote. As successor to Spain, as the overseer of the island's affairs, the United States unwittingly perpetuated the Cubans' lack of political responsibility. Cubans enjoyed the assurance that the United States would intervene to protect them from foreign entanglement or to solve their domestic difficulties, but the situation only encouraged their irresponsible and indolent attitude toward their own affairs and was not conducive to responsible self-government. 6/ Of the several Cuban presidents in office from 1902- 1933, Gerardo Machado (Liberal, 1924-1933) was by far the worst. His reliance on unscrupulous and often brutal tactics to remain in power, coupled with the worldwide sugar market collapse of 1930, aroused broad, popular opposition. As Cuba again teetered on the brink of insurrection, U.S. 
pressure forced Machado from office in 1933. That August, Carlos Manuel de Cespedes was appointed by the U.S. and the Cuban army to succeed Machado. His appointment was short-lived. Revolutionary student groups, loosely confederated into an organization called the Directorio, had strongly supported reform through Machado's ouster. To them, Cespedes' regime represented an attempt to slow down the reformist movement that had been gathering momentum since the 1920's. Considering Cespedes merely a stooge of the United States, the Directorio, supported by several minority groups, was relentless in its opposition to the new president. Meanwhile, new unrest within the enlisted ranks of the Cuban army began to erode Cespedes' influence from another direction. Unhappy with both a proposed reduction in pay and an order restricting their promotions, the lower echelons of the army invited representatives of the Directorio to meet with them at Camp Columbia in Havana on September 4, 1933. By the time the students arrived, enlisted members of the garrison at Camp Columbia had staged the so-called "Sergeant's Revolt" and taken command. That same night, Cespedes handed over the Presidency to a five-member commission comprised of students and enlisted members of the Cuban army. The revolt of 1933 has been called the "thwarted revolution" because Cubans looked for, but failed to achieve, a rapid solution to economic and political problems. Their hopes resided in a new and younger group of leaders who believed, not unlike Franklin Roosevelt, that government must take a major role in reform. At the same time they also blamed Cuba's economic problems on the United States. Despite the problems and short duration, the 1933 revolution had a profound impact on subsequent Cuban development and events. *Of the six elections held between 1908 and 1933, each of the parties won three.
University students had experienced political power and had stimulated an awareness among themselves and the general population of the need, and possibility, of rapid and drastic change. In addition, the revolution weakened U.S. domination of the Cuban economy and created opportunities for several sectors previously excluded from gaining a bigger share of the national wealth. Of perhaps the greatest importance was the fact that for the first time the Cuban army became a viable force in the governing of Cuba, and an obscure Sergeant by the name of Fulgencio Batista Y Zaldivar, leader of the Sergeant's Revolt, emerged as the self-appointed Chief of the Armed Forces and architect of Cuba's future for many years. Of mixed racial ancestry (Caucasian, Negro and Chinese) and lower-class origin, Batista ruled Cuba from behind the scenes from 1934 to 1940. Acting through a succession of presidents that he personally appointed, Batista managed to secure Washington's agreement to the revocation of the Platt Amendment in 1934. He also supported the drafting of a liberal constitution in 1940, but never saw its precepts enforced while he was in office. The Constitution of 1940 was in many respects the embodiment of the aspirations of the 1933 revolt. For the first time Cuba had a constitution that reflected Cuban ideals and philosophy, rather than that of a foreign power. The president would serve only one term of four years. He could be reelected, but only after remaining out of office for eight years. Many civil liberties and social welfare provisions were defined at great length, and the government would play a strong role in social and economic development. Workers were guaranteed paid vacations, minimum wages and job tenure, with Cuban nationals favored over foreigners in the establishment of new industries. The autonomy of the University of Havana received full sanction, thus fulfilling one of the oldest student demands.
Batista was the first president elected under the new constitution. Supported by a coalition of political parties, including the Communists, he assumed office in 1940. His administration (1940-1944) coincided with World War II, with Cuba declaring war on the Axis powers in 1941. Setting aside the 1940 constitution before it had even been executed, Batista declared martial law. Although Batista held wartime powers, his stewardship fell short of dictatorial. He maintained the support of the landed classes by guaranteeing tax concessions, and actively sought the backing of labor. He particularly catered to the left, allowing the communists relative freedom of action in return for their support. Although not particularly popular among the poor and some segments of the working class because of the war taxes he imposed, Batista's initial term as president brought a degree of solidarity and calm to Cuba that had not been experienced in decades. In 1944 and 1948, Batista permitted free elections, remaining discreetly in the background while Ramon Grau San Martin (1944-1948) and Carlos Prio Socarras (1948-1952) sought to fulfill the promises of 1933 and the Constitution of 1940. Unfortunately, neither of these Presidents -- both members of the Autentico (conservative) Party -- was able to completely remove the ubiquitous political corruption or solve Cuba's most serious economic problems. The sometimes stormy eight-year period reached a perverse climax when Eduardo "Eddy" Chibas, demagogic leader of the opposition Ortodoxo (liberal) Party, committed suicide while conducting a weekly radio broadcast in 1951. This act was interpreted by many Cubans as a gesture of revulsion at the deplorable conditions that he had long criticized. Despite the turmoil, Cubans had reason to hope that free elections were moving their country toward democratic stability.
Batista shattered those hopes, however, on March 10, 1952, when he executed a coup to prevent the approaching presidential elections. Batista had been precluded from running for reelection in either 1944 or 1948 by the Cuban Constitution. Contrary to his reputation, he smugly awaited the 1952 election. In the interim, he occupied himself by manipulating Cuban politics from behind the scenes and managing his business interests in Florida. When he began campaigning for reelection in 1952, however, Batista found that much of his old political support had eroded. Many Cubans still feared him, recalling his ruthless handling of political enemies and dissidents during the 1930's. As it became apparent that his candidacy had little chance of success, Batista called upon the one element of Cuban society that he still controlled -- the army. Confronted with the specter of a military coup, elected officials decided to flee rather than fight, leaving Batista unopposed. When the shock of this unexpected takeover subsided, all political elements began to search for a way to return to constitutional democracy, but the two main political parties (Autentico and Ortodoxo) splintered, because their leaders could not agree on whether or not to organize armed resistance or negotiate with Batista for elections. Once again the Cuban stage was set for revolution. NOTES Chapter I: Background 1/ Priscilla A. Clapp, The Control of Local Conflict: Case Studies: Volume II (Latin America), (Washington, D.C.: ACDA, 1969), pp. 71-73. 2/ Wyatt MacGaffey and Clifford R. Barnett. Cuba: Its People, Its Society, Its Culture, (New Haven: HRAF Press, 1962), pp. 1835. 3/ Lowry Nelson. Rural Cuba, (Minneapolis: U. of Minnesota Press, 1950), p. 158. 4/ Unless otherwise noted, this historical survey is based on the following three sources: John Edwin Fagg, Cuba, Haiti & The Dominican Republic (Englewood Cliffs, N.J.: Prentice Hall, Inc., 1965), pp.
1-111; Hudson Strode, The Pageant of Cuba (New York: Harrison Smith and Robert Haas, 1934), pp. 3-342; and Jaime Suchlicki, Cuba From Columbus to Castro (New York: Charles Scribner's Sons, 1974), pp. 3-174. 5/ Strode, op. cit., pp. 343-344. The complete text of the Amendment is cited. The Platt Amendment also gave the United States the right to establish naval bases. Guantanamo Naval Base, acquired in 1903, is a direct result. 6/ Jaime Suchlicki, op. cit., p. 105. CHAPTER II: INDUCING FACTORS TO REVOLUTION Virtually all popular revolutions have had their roots in economic, social, political or military grievances. Cuba was no exception. Chapter I offered a general overview of historic conditions leading to Batista's 1952 coup. From those conditions, Chapter II will isolate and explore the factors which set the stage for Castro's insurrection. Economic 1/ Foreign Control of the Economy. As noted in Chapter I, Spanish administration of Cuba in the 18th and early 19th centuries had been relatively liberal. Economic reform measures instituted during the same period launched Cuba into rapid economic expansion with new sugar markets both in Europe and North America. Cuban economic prosperity ended, however, when Ferdinand VII assumed the Spanish throne and established restrictive trade measures and heavy taxation, all designed to protect Spanish goods from foreign competition. The brunt of these restrictions fell mainly on Cuban landowners. Moreover, Ferdinand's return heralded the resurgence of Spanish control over the Cuban economy, thus creating another source of irritation to Cuban landowners. The combination of these two circumstances eventually drove the landowners to ally themselves with middle-class groups in support of a Cuban independence movement in the 1890's. One consequence of this movement was the Spanish-American War. Following the Spanish-American War, Cuba's economic base remained primarily in agriculture, with sugar the largest cash crop.
Commerce maintained a distant second place, while manufacturing and food processing were virtually nonexistent. With the United States emerging from the war as both Cuba's protector and primary trade partner, Washington was under considerable pressure from American business groups to annex the island purely for economic reasons. Although the U.S. government resisted these urgings, the fact that Cuba represented an excellent capital outlet for American investors certainly could not be ignored. Consequently, as U.S. investments in Cuba increased in the early 1900's, the United States increasingly became a stabilizing force in Cuban affairs, as much for the protection of American lives and business interests as for strategic considerations. The controversial Platt Amendment became the political solution to America's dual economic and strategic concerns in Cuba. As U.S. investments rose from $205 million in 1911 to $919 million in 1929, American businessmen often sought the implementation of the amendment for the protection of their interests. 2/ Indeed, most of the military interventions by the United States in Cuba during this period arose from concerns that American economic interests were being threatened. American investments in Cuba began to decline sharply following the 1929 stock market crash. Cuban prosperity of the 1920's was dashed by the world-wide financial crisis that followed. In an effort to stabilize Cuba's economy and rekindle American investments, Washington and Cuba signed a trade agreement in 1934. This agreement reduced U.S. tariffs and sugar quotas, and guaranteed Cuba higher than world market prices for its sugar crop. Between 1935 and 1959, Cuban-American economic ties remained relatively stable. American commitments totaled approximately $713 million by 1954, and between $800 million and $1 billion by 1958. 3/ The United States, as a market for sugar and a source of imports, continued to carry much influence in Cuba.
In 1955, for example, the U.S. purchased 73.4 percent of Cuba's total exports, while Cuba obtained 68.9 percent of her imports from the United States. 4/ At that time, sugar and sugar by-products accounted for 79.8 percent of Cuba's exports. 5/ American business interests by the mid-1950's controlled over half of Cuba's public railroads, about 40 percent of her sugar production and over 90 percent of her utilities (telephone and electric). 6/ Cuba's efforts to institute tariffs to protect her fledgling non-sugar industries were generally unsuccessful because Cuban producers, attempting to establish themselves in these areas, were never able to compete with the quality of American-made goods. In summation, it is significant to note that while Cuban-American economic ties brought obvious prosperity to limited segments of the island's population, they did so only at the expense of Cuba's national potential and economic independence. Land Reform. During the early colonial period, Spain awarded large land tracts (haciendas) to certain colonists. Spain later attempted to reverse this trend in the late 18th and early 19th centuries by subdividing some of the haciendas into small tracts and selling them in odd-sized parcels. Most were purchased by business groups, some remained in the hands of large landholders and a few were bought by small-scale farmers. Unfortunately, economic and technological factors of the time worked to countermand this first attempt at agrarian reform. During the same period, the sugar industry began its rapid expansion into a position of extreme dominance in the Cuban economy. The application of steam power plus other technological advances both increased the efficiency of sugar production and opened new markets. The Cuban sugar industry received its greatest boost in the late 19th century when a precipitous drop in steel prices made the construction of railroads on the island financially feasible.
Until then, the antiquated means of transporting sugar cane had limited mill size and production. Railroads greatly increased the territory that an individual mill could support. This breakthrough convinced most sugar corporations that the way to increase profits was to assume control over all aspects of sugar production, from the field through the mill. Accordingly, throughout the late 19th and early 20th centuries, these corporations launched major efforts to acquire their own land. Small sugar cane farmers were largely eliminated in this process. Those that resisted were either coerced to sell or bludgeoned with the Cuban legal system until they acquiesced. As a result, although sugar production increased markedly between 1877 and 1915, the number of mills decreased from 1,190 to 170. Meanwhile, the sugar corporations and large landowners increased their control of Cuba's agricultural land to over 76 percent. 7/ Throughout the 1920's, and particularly during the Depression, various Cuban political groups agitated for agrarian reform. Their demands concentrated primarily on greater government control over the sugar corporations and redistribution of the large estates among the landless. These very issues were among the major causative factors leading to the 1933 Revolution and the ouster of President Machado. A series of sugar control acts enacted during the mid and late 1930's followed, but the politically powerful landowners and sugar corporations saw little actual loss of control. A provision in the 1940 Cuban Constitution designed to fragment the large estates was equally ineffective since it was never enforced. President Batista and his successors valued the support of the wealthy landowners too much to alienate them by executing this particular law. Agrarian reform remained a major issue into the 1950's when Fidel Castro used the simple appeal that those who farmed the land should own it.
However, Castro's position on agricultural reform did not gain him significant support from the Cuban peasants until the end of his revolt. Many peasants never had an opportunity to hear his ideas until the later phases of the conflict, and those that did usually did not understand them because they were couched in such heavy revolutionary rhetoric. Nevertheless, Castro continued to propagandize the evils of Batista's agricultural policies throughout the revolt, using his own reformist ideas as part of his revolutionary platform. Unemployment. Because of Cuba's one-crop economy, thousands of Cubans faced several months of unemployment every year. Potential full employment existed only 4 to 6 months of the year when sugar was being harvested and brought to the mills. The specter of a bad harvest or low world demand for sugar only served to compound the problem. Sugar workers, unable to find other work during the off-season, were reduced to living on credit or asking for handouts to survive. This situation was more severe in the rural areas where alternate jobs were not available. Moreover, rural workers frequently migrated to the cities in search of employment, where they helped worsen the situation in urban areas. They often settled in city slums, usually becoming a source of political unrest and agitation. U.S. Department of Commerce figures indicate that 400,000 - 450,000 workers (over 20 percent of the work force) were unemployed during the 1952 off-season. At the peak of the 1953 harvest, 174,000, or eight percent, were still looking for work. 8/ Labor fared very badly during the late 1920's and early 1930's when Cuba's economy was in the doldrums. Government attempts to stabilize the economy often eliminated the few existing worker protections. Labor unions were illegal, and labor organizers were often prosecuted and imprisoned. Conditions became so intolerable by 1933 that many workers struck against the government, helping to overthrow President Machado.
Batista eventually checked the revolutionary tendency of the labor movement by legalizing labor unions and promising concessions to their organizers. 9/ Urban labor conditions improved dramatically under Batista's tutelage, as many social and labor measures (minimum wage, vacations, bonuses, working hours, medical benefits) were incorporated into the 1940 Constitution. However, labor conditions for rural workers remained largely unchanged as the unemployment situation was not realistically addressed. Ironically, the advent of labor unions and their often excessive demands tended to lead many companies into bankruptcy and stymie the growth of others. Obviously, such instances only served to deepen the unemployment situation. Social Class System. Not unlike most countries emerging from colonial rule, Cuba had a fairly well entrenched class system, loosely defined by economic stature. The upper-class consisted of the landed and moneyed class, owners of businesses and plantations, remnants of the old elite or self-made individuals who had amassed their wealth through a combination of business and politics. At the opposite end of the spectrum was the lower-class, most of whom made their living in the fields and factories of the country. Professionals, small merchants, army officers and government workers occupied the levels between the above extremes, and generally comprised the middle-class. Upward mobility from the lower-class, especially the rural lower-class, was difficult at best. The period 1933-1959 saw some improvement in the lot of urban workers because of the aforementioned labor movements. In isolated instances, industrial workers or their offspring were able to move into the lower-middle-class through educational opportunities. Interestingly, social conditions during the 1956-1959 insurrection against Batista were considerably better than they had ever been. General worldwide prosperity, plus the demands of the Korean War, kept Cuba's sugar exports high.
During this same period wage earners were receiving the biggest share of the national income they had ever experienced, 65 percent between 1950 and 1954. Per capita income, while not high by U.S. standards, averaged $312 per year, ranking as one of the highest in Latin America at the time. 10/ Consequently, while Cuba was certainly no economic paradise, it is not surprising that the insurrection garnered little support from the well-organized, politically-influential labor unions and the industrial workers they represented. Middle-class intellectuals, on the other hand, unhappy with their economic position and frustrated by their inability to breach the political power held by the upper-class, were a frequent source of revolutionary spirit in Cuba.* In Cuba, upward mobility was marked by education; lower-class and lower-middle-class aspirations were fueled by it. A good education leading to a degree as a lawyer, doctor or teacher was virtually the only way an individual could hope to improve his economic position. Paradoxically, the undiversified nature of the Cuban economy often forced these newly trained, middle-class professionals to settle for occupations far below the levels for which they were qualified. This under-employment was a constant source of frustration for individuals so afflicted; it not only cost them wealth, but more importantly, deprived them of the prestige they thought they deserved. Inevitably, these professionals were frequently dissatisfied with their society and often sought to change it, usually through some sort of revolutionary movement. *This is not unique to Cuba. For example, the American, French, and Russian revolutions all had their roots in the middle-class.
Thus, every Cuban insurrection from the mid-1800's onward was led by individuals and segments from the middle-class who had the intellectual ability, education, and skills to provide the appropriate organization, leadership, articulation of goals, and action.* The impracticality of many of the idealistic programs espoused by the middle-class doomed most of their movements to failure even before they had started. Their general lack of ability to institute those programs on which they had risen to power only seemed to hasten the return of other, more oppressive forms of government. The frustration and disillusionment resulting from their failures usually laid the groundwork for future movements. Understandably, the cyclical nature of these revolutions and counter-revolutions tended to destabilize and fragment Cuban politics from the 1860's to 1959. Urbanization. The rapid expansion of the sugar corporations in the early 1900's gave tremendous impetus to urban growth. As small landowners and tenant farmers were displaced by land appropriation schemes and mill modernizations, they began to seek other jobs and higher wages in the cities. Thus, by 1953, residents of Cuban cities and bateyes (small communities established near sugar mills) accounted for 57% of the total population. 11/ Disappointed by what they found upon arrival in the cities, these new urban migrants formed the core of the labor movement that ousted President Machado in 1933. The new government established by Batista took a much greater interest in their plight. Legislation was passed that offered these new urbanites more security than they had had previously in either the city or the country. *Even Castro's organization, which purported to have strong rural roots, actually had very little active support from the rural lower-class until the last days of the insurrection. It was, in fact, composed almost totally of members of the middle-class.
As a result, the guarantees provided a stabilizing force in the Cuban society; urban workers generally supported the incumbent government, as evidenced by their refusal to join the general strikes called against Batista in the mid-1950's. Thus, contrary to classic Marxist theory, the rural Cubans who elected not to move toward urbanization were the most susceptible to Castro's appeals. This occurred for one basic reason: the rural poor generally did not benefit from the economic improvements their city brethren had won. Political Latin American constitutions have often been filled with idealistic goals which in reality were too difficult to attain. The Cuban Constitutions of 1901 and 1940, and subsequent revisions of the electoral codes, were no exceptions. All were written in such a way as to allow wide popular participation in the electoral process. However, political realities such as "personalismo" (personality cults), jealous rivalries, unlawful political pressure, and occasional applications of force, kept Cuba's political process in constant turmoil. Political parties not in power were suspicious of the incumbent's promises and intentions; ruling and opposition parties, usually loyal to leaders rather than ideals, frequently splintered; and alliances between political groups for purely practical reasons (usually political survival) seldom endured. 12/ The government of Cuba, sustaining the Spanish tradition, was rarely free of graft between 1902 and 1959. Its primary function as a means by which politicians could achieve wealth and status understandably made incumbents reluctant to reform the system which perpetuated their own longevity and interests. Consequently, even Cuba's most elementary economic and social problems were seldom addressed, and constitutional processes and provisions were usually bypassed or ignored.
With this type of political climate engrained in Cuban society, it is not difficult to understand how Batista was twice able to easily grab power. Military Since their inception following independence in 1902, the Cuban armed forces, to include the police, were organized to control internal disorders rather than fight major battles or wars. It is not an exaggeration to say that whoever controlled the military, controlled the government. When Batista seized power in 1933, for example, he owed his success to the armed forces. In turn, they eventually owed their wealth, position and privileges to him. Batista never forgot his military roots and continually nurtured the support of the military even after he left office in 1944. Although Presidents Grau San Martin and Prio Socarras each altered the composition of the high command to install men more loyal to themselves, no serious effort was made to undermine the basic military structure or budgetary support that Batista had carefully built. Consequently, when Batista decided to stage his 1952 coup, Cuba's armed forces were quick to help reestablish their benefactor. By the mid-1950's the Cuban armed forces had become a class unto themselves. They were superior in numbers and weapons to any opposition force. 13/ They influenced every segment of Cuban society and were more powerful than any political party. Over time they had grown to represent everything that was repressive about Batista's government, because they were the enforcers of his policies and purges. Castro eventually came to realize that this symbiotic relationship between Batista and his armed forces made political, social or economic change impossible unless one resorted to violence. NOTES Chapter II: Inducing Factors to Revolution 1/ Unless otherwise noted, material on the economic background of Cuba is from: Robert F. Smith. The United States and Cuba: Business and Diplomacy, 1917-1960, (New York: Bookmen Associated, 1960).
2/ Foreign Area Studies Division, Special Warfare Area Handbook for Cuba, (Washington, D.C.: SORO, 1961), p. 503. 3/ Ibid., p. 37. 4/ Smith, op. cit., p. 166. 5/ U.S. Department of Commerce. Investment in Cuba: Basic Information for United States Businessmen, (Washington, D.C.: GPO, 1956), p. 139. 6/ Ibid., p. 10. 7/ Smith, op. cit., p. 175. 8/ Department of Commerce, op. cit., p. 23. 9/ Lowry Nelson, Rural Cuba, (Minneapolis: U. of Minnesota Press, 1950), pp. 88-92. 10/ Department of Commerce, op. cit., p. 184. 11/ Ibid., p. 178. 12/ Foreign Area Studies Division, op. cit., p. 356. 13/ Adrian H. Jones and Andrew R. Molnar. Internal Defense Against Insurgency: Six Cases, (Washington, D.C.: SSRI, The American University, 1966), p. 69. By the late 1950's, Cuba's armed forces, to include police and paramilitary, numbered between 30,000 and 40,000 men. They were considered to be well-armed, at least in relation to their traditional role. Their equipment included tanks and half-tracks, both of which were periodically used against Castro's insurgents. The Cuban Air Force had about 65 aircraft, including both fighters and bombers. CHAPTER III: CASTRO'S INSURRECTION Castro's insurrection began in July 1953 with his attack on the Moncada Fortress in Oriente Province, and ended in January 1959 when President Batista was forced to leave the country. During the intervening years, Fidel Castro planned, organized and executed a guerrilla war that brought about the defeat of one of the largest and best-equipped armed forces in Latin America. Chapter III provides a chronological account of that period beginning with brief biographical sketches of the two primary antagonists. The Sergeant and the Attorney In the tradition of the Spanish, Cubans have long sought to choose their leaders based on the cults of "personalismo" (personality) or "caudillo" (charismatic leader).
Perhaps the two finest examples of that tradition are the two Cubans who shaped Cuba's destiny from 1933 to the present: Fulgencio Batista Y Zaldivar and Fidel Castro Ruz. A better understanding of their power struggle during the 1950's can be acquired if background on their origin is noted.

The two men facing each other in the Cuban ring were completely different both physically and mentally. Batista was fifty-two and Castro twenty-seven when the attack on the Moncada took place. The President was short, with an olive complexion and mestizo features, while his opponent was tall, athletic and fair skinned. Batista was an ordinary soldier, though he promoted himself from sergeant to general. Castro was a lawyer, more interested in social causes than in bourgeois litigation. The President had been born in Oriente, like Castro, but while Batista came from a very humble home, the rebel had been born into a comfortably-off landowning family. 1/

Batista. 2/ The son of Belisario and Carmela Batista, Fulgencio Batista was born in the sugar town of Banes in early 1901. His parents were peasants and descendants of the Bany Indians of Oriente province. His father had been a sergeant in the Cuban Army of Liberation and fought against the Spanish during the Spanish American War. Belisario Batista began work in the early 1900's for the United Fruit Company as a cane cutter, and Fulgencio learned from his father the rigors of dawn-to-dusk work in the fields. Intent on receiving an education, Fulgencio attended both public night school and a day school ("Los Amigos") run by American Quaker missionaries. At night school he learned to read and write Spanish; at Los Amigos he mastered speaking and writing English. By the time he was 20, Batista had held jobs as a cane-cutter, wood cutter, store attendant, planter, carpenter and railroad brakeman. In classical Marxist terms, his class origins made him an excellent prospect to become a communist revolutionary.
At age 20, Batista enlisted in the army to gain experience and see the world. He was initially assigned to the Fourth Infantry Division based at Camp Columbia in Havana. At first, he planned to use his free time to train as an attorney, but discovered that he had to have a high school diploma. Undaunted, he enrolled in the San Mario academy night school to become a speed typist and stenographer. In 1923, Batista passed his examination for corporal and in 1926, that for sergeant. Upon promotion to sergeant he was assigned as a recorder to the Councils of War of the Cuban War Department. While there, he discovered and quickly assimilated the arts of political power and class privilege. Educated in the full range of the human condition in the Cuba of the early 1900's as few men were, Batista saw his chance for power and influence after the ouster of President Machado. Without hesitation, he led soldiers, corporals and sergeants in a revolt against their military superiors. Batista's mutiny was supported throughout the armed forces. Corrupt officers made its success inevitable. On September 4, 1933, Batista was turned overnight into a Colonel and Chief of the Cuban Armed Forces.

As Chief of the Cuban Armed Forces, Batista soon realized the power of his position. President Machado had resigned, as had his U.S.-appointed successor, President Cespedes. The five-member commission that ruled the country (of which Batista was a member) was having a very difficult time reestablishing the government. After several weeks of watching the new government struggle, Batista finally seized upon the situation, used his military position to ensure success and assumed de facto control of the government. He believed that Cuba had discarded its colonial status only to become a pawn of foreign capitalism. Advocating sweeping social, economic and political reforms, he tried to build the Cuba the 1898 revolution had envisioned.
The Cuban Constitution of 1940 reflected most of Batista's ideas, although, like the previous Cuban Constitution of 1902, it was more idealistic than practical. Elected president in 1940, Batista never really had a chance to enact the constitution he supported. Wartime powers temporarily set aside constitutional guarantees until 1944, when his term ended. Consequently, most of the long-awaited reforms sought during the 1933 revolution had to wait until 1945 to be instituted. While the 1940-1944 period was not particularly oppressive from economic and social viewpoints, a considerable amount of political division arose about what was best for Cuba. Through it all, Batista emerged remarkably unscathed. He had become an inspiration to the poor because of his humble beginnings and "bootstraps" rise to power, and an idol to his soldiers because he had lifted them from poverty through rapid promotions, increased salaries and benefits, and no small measure of class privilege.

In 1952, Batista knew that his chances for reelection were poor; he was generally not attuned to Cuban politics after having spent several years living in the United States managing his business interests, and was running seriously behind in the polls. However, he also knew that his strong-man image could easily frighten the incumbent government as well as gain the unconditional support of the regular army. Batista correctly guessed that wealthy businessmen, peasants and workers would not be threatened by any coup he led. With little fanfare, Fulgencio Batista entered Camp Columbia, the principal garrison of Havana, on March 10, 1952. In less than 12 hours he had deposed President Prio Socarras and assumed control of the government; not a shot was fired. He swept aside the prevailing political parties since none was led by anyone who had the political seniority or wherewithal to oppose him.
Yet, for all his political astuteness, Batista made one mistake; he underestimated the frame of mind of a generation of young, middle-class Cubans who were tired of political cynicism and ready for a fresh revolutionary start. Batista has stated that he returned to politics and staged the 1952 coup because he was the only Cuban leader who could restore the country to the path directed by the 1940 Constitution. This altruistic rationale is arguable for two reasons. First, Batista's support for the 1940 Constitution was always closely aligned with some sort of political gain that helped solidify his power. World War II conveniently precluded him from ever having to make good his support. Second, Batista's actions following his 1952 coup were generally not those of a man interested in promoting the general welfare of his constituents. While he had numerous opportunities to install some of the political and land reforms that the Constitution guaranteed, he instead chose to provide the country with cronyism, repression and corruption. The idealism that Batista espoused in the 1930's was replaced by personal aggrandizement in the 1950's. Ironically, while he had been viewed by many as a caudillo (charismatic leader) in 1933, he was seen as just another usurper in 1952.

Castro. Fidel Castro Ruz was born on August 13, 1927, in Biran, Oriente province, about 40 miles from Batista's birthplace. 3/ His father, Angel Castro Y Argiz, was a Galician who had come to Cuba as a soldier with the Spanish army in 1898. Upon demobilization, Angel Castro elected to stay on the island, subsequently working for the United Fruit Company that also employed Belisario Batista. Unlike Belisario, Angel became an overseer for United Fruit and in 1920, sold the company a strategic piece of land, for which he was handsomely paid. That sale marked the beginning of Angel Castro's prosperity and eventual movement into the upper-middle-class.
In Fidel Castro's own words:

I was born into a family of landowners in comfortable circumstances. We were considered rich and treated as such. I was brought up with all the privileges attendant to a son in such a family. Everyone lavished attention on me, flattered, and treated me differently from the other boys we played with when we were children. These other children went barefoot while we wore shoes; they were often hungry; at our house, there was always a squabble at the table to get us to eat. 4/

By his first marriage, Angel had two children, Lidia and Pedro Emilio. Following the death of his first wife, he married his house servant, Lina Ruz Gonzales, by whom he fathered seven more: Angela, Agustina, Ramon, Fidel, Raul, Emma and Juana. Fidel and those born before him were actually illegitimate, as Angel did not marry Lina until sometime after Fidel's birth. 5/ Of Fidel's eight brothers and sisters, only Raul, who linked his fate to Fidel from the beginning, was to play an important part in Cuban affairs.

At age seven, Fidel began his primary education at the Colegio La Salle, a Jesuit school in Santiago de Cuba. He later attended the Colegio Dolores, also a Jesuit institution, from which he graduated in 1942. That same year, at age sixteen, Fidel enrolled at the Colegio Belen in Havana, the most exclusive Jesuit school in the country. At Belen his best subjects were Agriculture, Spanish and History. In 1944, he was voted "the best school athlete." 6/ Fidel graduated the next year. In his school yearbook it was noted:

1942-1945. Fidel distinguished himself always in all subjects related to letters. His record was one of excellence, he was a true athlete, always defending with bravery and pride the flag of the school. He has known how to win the admiration of all. He will make law his career and we do not doubt that he will fill with brilliant pages the book of his life. He has good timber and the actor in him will not be lacking.
7/

A prophetic description indeed, but wrong on one count; revolution and the leadership of Cuba, instead of law, would become Fidel's vocation.

Castro entered the University of Havana in the autumn of 1945. As his school yearbook had predicted, he chose law as his course of study. Of Fidel's early university career, Theodore Draper observed:

Fidel Castro was a classic case of the self-made rich man's son in a relatively poor country for whom the university was less an institution of learning or a professional-training school than a nursery of hothouse revolutionaries. He chose a field of study in which the standards were notoriously low, the pressure to study minimal, and his future profession already overcrowded. Since he did not have any real needs to satisfy in the school, did not respect his teachers, and could get by on his wits and retentive memory, he was easily tempted to get his more meaningful and exciting experiences in extra-school political adventures. 8/

Starting a political career while still a young man was somewhat of a Cuban tradition, so Fidel was not particularly unique. Perhaps what did make him stand out, however, was the intensity with which he pursued political goals. In Fidel's words:

From all indications, I was born to be a politician, to be a revolutionary. When I was eighteen, I was, politically speaking, illiterate. Since I didn't come from a family of politicians or grow up in a political atmosphere, it would have been impossible for me to carry out a revolutionary role, or an important revolutionary apprenticeship, in a relatively brief time, had I not had a special calling. When I entered the university, I had no political background whatsoever. Until then I was basically interested in other things, for instance, sports, trips to the countryside -- all kinds of activities that provided an outlet for my unbounded natural energy. I think that is where my energy, my fighting spirit, was channeled in those days.
At the university, I had the feeling that a new field was opening for me. I started thinking about my country's political problems -- almost without being conscious of it. I spontaneously started to feel a certain concern, an interest in social and political questions. 9/

Only two years after he first enrolled at the University, Fidel became involved in an attempted coup d'etat. In 1947, he joined a group of revolutionaries who were planning the overthrow of the Dominican Republic's dictator, General Rafael L. Trujillo. The exiled Dominican, General Juan Rodriquez, was paying the expenses, and the invasion had the tacit support of Cuba's President Ramon Grau San Martin. While final preparations were being made in Oriente Province, the Dominican delegate to a meeting of the Ministers of Foreign Affairs of the Pan-American Union in Petropolis, Brazil, accused the Cuban government of mounting an invasion of his country. Documentation of his accusation clearly showed that the security of the plan had been broken. Grau San Martin, embarrassed that these covert plans had been discovered, ordered the Cuban Navy to intercept the would-be expeditionaries. The Navy did manage to apprehend the group at sea, but Fidel was able to escape by swimming to shore with a tommy-gun slung round his neck.

This setback, like others Castro was to experience in the future, only seemed to inspire Fidel with more political dedication. The planned coup was really nothing more than a personality shift. While Trujillo's regime was extremely repressive, Rodriquez did not offer much of an alternative. Grau San Martin, as a democratic reformist, apparently supported the coup because of its potential for change. Although little is known about Castro's reasons for joining the expedition, he evidently did so out of a sense of adventure. At the University he quickly acquired the reputation of a rabble-rouser. He frequently spoke out against repression, communism and dictatorships.
Fidel already saw himself as the champion of the oppressed; the Dominican Republic expedition was an extension of that fervor. His participation was particularly significant because it marked the first time that he became actively involved in a revolutionary cause.

Castro returned to the University. However, less than a year after the abortive attempt to overthrow Trujillo, he was on the way to Bogota, Colombia, as a delegate to the Anti-Colonialist, Anti-Imperialist Student Congress that was assembling to demonstrate at the 9th Conference of the Pan-American Union.* On the opening day of the conference, the popular Colombian Liberal Party leader, Jorge Gaitan, was murdered while on his way to make a speech. His assassination enraged liberal student groups, who quickly began to take violent actions against Gaitan's enemies. With the Pan-American Conference in disarray and the Colombian capital on the verge of total anarchy, Colombia's President called the warring factions together and secured an agreement to end the fighting. Student groups were accused of instigating the disruption, so Castro and his delegation were forced to take refuge in the Cuban embassy. They were later smuggled out and returned to Cuba aboard an aircraft that was transporting cattle. Castro's role in the "Bogotazo," as the riot became known, has apparently never been clearly defined, other than to say that it matched his previous pattern of supporting liberal causes.

*The Student Congress was sponsored by Colombian liberal leaders (non-communists) who wanted to see less American influence in Latin America.

After his return to the University, Castro married fellow-student Mirtha Diaz Balart. They then had a son, Fidelito, born in 1949. Fidel became President of the Association of Law Students that same year and eventually graduated with a law degree in 1950. Following graduation, Fidel established a law partnership with two other attorneys.
However, his proclivity to accept cases with social or political notoriety brought him little monetary reward, although such cases did gain him considerable publicity. Meanwhile, Castro was attracted by the anti-corruption platform of Eduardo "Eddy" Chibas and his Ortodoxo Party. Fidel joined the Party and shortly afterwards became a Congressional candidate for one of the Havana districts in the approaching 1952 elections. He was precluded from ever actually standing for the June 1st election by Batista's March 10th coup.

Immediately following Batista's coup, activists in Havana began to plan his ouster, Fidel Castro among them. He first appealed to the Court of Constitutional Guarantees on the ground that the dictator was violating the 1940 Constitution. A few days later he petitioned the Emergency Court of Havana on the same grounds, noting that Batista had so undermined and violated the Cuban Constitution that he was liable to serve over 100 years in prison. Only the Court of Constitutional Guarantees responded, rejecting Castro's petition by noting that "revolution is the fount of law" and that since Batista had regained power by revolutionary means, he could not be considered an unconstitutional President. 10/

As a "former" radical student organizer and Congressional candidate, Castro was under periodic government surveillance. Nevertheless, frustrated by Batista's coup and his fruitless legal attempts to countermand it, Fidel joined with Abel Santamaria Cuadrado to form a loose revolutionary organization of approximately 200 students.* Their first priority was to get weapons. Over the course of the next few months they purchased shotguns and .22 caliber semi-automatic rifles at various armories. At the same time, they began planning and training for a raid on one of the regular army's garrisons.

*Santamaria was an accountant employed by General Motors (Pontiac) of Cuba and assistant editor of El Acusador, a revolutionary paper.
Their plan was to seize the type and number of heavy weapons and ammunition they would need to carry out an effective insurrection. After months of preparation, Castro and Santamaria decided to attack the military garrisons of Santiago and Bayamo in Oriente province. Moncada Fortress in Santiago was to be the main target, with the attack on the army post in Bayamo a diversion. Fidel did not intend to occupy Moncada, only to seize the weapons and ammunition in the armory, and withdraw. Ninety-five rebels were allotted to the task. Armed only with shotguns and .22 caliber rifles, and dressed in khaki uniforms to blend with the regular forces, Castro's men relied heavily upon surprise. Once they had established control over the two garrisons, Fidel hoped the regular troops would join the anti-Batista movement. He planned to distribute the weapons to the revolutionary supporters that he envisioned were everywhere, thus presenting Batista with a fait accompli.

Moncada 11/

The attack led by Fidel Castro on the Moncada Barracks in Santiago de Cuba on July 26, 1953, has a similar significance for the Cuban Revolution as the fall of the Bastille eventually had for the French Revolution. In both cases, their significance was symbolic, not practical, and they were made important by the events that came after. 12/

At 5:15 a.m. on July 26, 1953, the attempt to seize Moncada Fortress began. After staging at a farm just outside Santiago, Castro's advance force surprised the sentries at one of the gates and entered the fort undetected. Unfortunately, the main body of the force was not so lucky. Because of poor reconnaissance, the majority of the rebels were unfamiliar with either the layout of the fort or the streets of Santiago. Approaching in small groups or in cars, several lost their way without even reaching the fort. Castro's car accidentally came face-to-face with an army patrol; the firefight that resulted alerted the rest of the barracks.
The one group that did penetrate the fort found themselves occupying the barber shop rather than the armory. Realizing that further attack was hopeless, Fidel ordered a withdrawal. Remarkably, until this point, only seven rebels had become casualties, against some 50 soldiers. In the military pursuit that followed, however, approximately 70 rebels were killed (including Abel Santamaria), with most of the casualties occurring after they had surrendered. The small group that had attempted to capture Bayamo was equally unsuccessful. Over half of the group of 27 rebels who were captured were shot. Fidel, his brother Raul, and a few others managed to escape to the Sierra Maestra Mountains. They were eventually captured there, but not harmed, because of the compassion of Lieutenant Pedro Manuel Sarria, who knew Fidel from their university days.

In the subsequent trials, Castro and the other survivors were sentenced to prison on the Isle of Pines for terms ranging from six months to 15 years. During the trial, Fidel spoke in his own defense for five hours, voicing a general program of reform. He concluded his remarks with:

I have reached the end of my defense, but I will not do what all lawyers do, asking freedom for the accused. I cannot ask that, when my companions are already suffering in the ignominious prison of the Isle of Pines. Send me to join them and to share their fate. It is understandable that men of honor should be dead or prisoners in a Republic whose President is a criminal and a thief ... As for me, I know that prison will be harder for me than it ever has been for anybody, filled with threats, ruin and cowardly deeds of rage, but I do not fear it, as I do not fear the fury of the wretched tyrant who snuffed out the lives of seventy brothers of mine. Condemn me. It does not matter. History will absolve me. 13/

Following the trials and flushed with his victory, Batista restored constitutional guarantees and lifted censorship.
For the first time, the Cuban people were able to hear the uncensored version of what had happened at Moncada and the accusations of Fidel Castro at the trials. As a result, Castro became somewhat of a martyr among the anti-Batista forces. Batista, meanwhile, ordered new elections for November 1, 1954, offering himself as a candidate with his own election platform. The main opposition candidate was former president Ramon Grau San Martin. Initially subsidized and encouraged by Batista, Grau San Martin became convinced that Batista had no intention of holding fair elections. Forty-eight hours before the election he withdrew his candidacy, leaving Batista virtually unopposed. Batista won easily, and on February 24, 1955, began a new four-year term of office as Cuba's President.

Movimiento 26 de Julio (26th of July Movement)

Isle of Pines. The convicted survivors of Moncada, among them Fidel and Raul Castro, were imprisoned on the Isle of Pines in October 1953. The Moncada group resolved to continue their revolutionary plotting while incarcerated, and to reorganize and train in Mexico upon their release. Moncada, they reasoned, had made them national heroes and martyrs; to give up short of victory would shame those who had already died.

Shortly after his arrival, Fidel organized a school in which he was the sole instructor. He found a willing group of students among the veterans of Moncada and the uneducated peasants who were being held prisoner. Fidel's classes ranged from history to philosophy, encompassing contemporary politics and social issues along the way. He even taught weapons training (without weapons). The school passed the time and gave him an excellent opportunity to keep the revolutionary spark alive and plan for the future. Schools of this sort were not uncommon at the Isle of Pines because the prison was used primarily for political prisoners.
The prison was considered to be a "minimum security" installation; prisoners were often left to their own devices. However, Castro's school stirred such strong revolutionary fervor among the inmates that it was eventually closed down, and Fidel was placed in solitary confinement. 15/

Meanwhile, with Batista's position again assured after the 1955 elections, several political leaders demanded the release of political prisoners. Batista at first turned a deaf ear. However, once approved by the Cuban House of Representatives, Batista relented and granted amnesty to all political prisoners on May 13, 1955.* Two days later, Fidel Castro and the other survivors of Moncada were released. The Moncada veterans were greeted by their families and friends at the prison gates, and welcomed throughout the Isle of Pines.**

A warm reception followed at the railway station when they arrived in Havana. Everyone wanted to see and hear Fidel Castro. Both the radio and television services were after him and flattering offers ... 16/

It was not long, however, before Castro was reinforced in his conviction that he would never unseat Batista if he remained in Cuba.*** Every attempt he made to address the people at organized rallies, on television or on the radio was thwarted. Finally, the combination of government suppression and his uncertain leadership position within the Ortodoxo Party induced Fidel to leave for Mexico in July 1955, thus implementing the plan he had formulated in prison.

*He justified his decision on the grounds that these men no longer posed a threat to his power following the elections.

**Fidel's wife and son were not present. She and Fidel had become estranged several years before, partially because Fidel could not support her in the upper-middle-class style to which she was accustomed, and partially because her brother was a close friend and confidant of Batista.

***Castro had actually made the decision to leave Cuba while still in prison.
17/

In a letter to a friend just prior to his departure, Castro observed:

I am packing for my departure from Cuba, but I have had to borrow money even to pay for my passport. After all, it is not a millionaire who is leaving, only a Cuban who has given and will go on giving everything to his country. All doors to a peaceful political struggle have been closed to me. Like Marti,* I think the time has come to seize our rights instead of asking for them, to grab instead of beg for them. Cuban patience has its limits. I will live somewhere in the Caribbean. There is no going back possible in this kind of journey, and if I return, it will be with tyranny beheaded at my feet. 18/

*Jose Marti: Cuban liberator and national hero. Died in 1895 while fighting to free Cuba from Spain's domination. Primary influence, from the Cuban perspective, behind the Spanish-American War. The Reagan administration plans to establish "Radio Marti" broadcasts to Cuba, named in his honor.

Exiled in Mexico. Raul Castro and most of the other Moncada rebels were waiting for Fidel in Mexico when he arrived.** They had gone ahead after their release from prison, and were already engaged in preparations to invade Cuba from Mexico.

**A few remained behind to establish underground activities in Cuba and prepare for Castro's return.

Not long after his arrival in Mexico, Fidel solicited and secured the services of two men who were to prove invaluable: Ernesto "Che"*** Guevara and "Colonel" Alberto Bayo. Both would be lieutenants to Castro in the coming years.

***Nickname given Guevara by the Cubans while they were training in Mexico. It means "mate."

Ernesto "Che" Guevara. Ernesto Guevara was born in Cordoba, Argentina, in 1928. The son of an architect-engineer, he was raised as a firm member of the middle-class in Buenos Aires. As a youth, Guevara showed himself to be a carefree, unconventional and tireless boy. Only his asthma, which was to plague him throughout life, slowed him down.
Early on he displayed a deep interest and concern for the plight of the common Argentinian, often preferring the company of members of the lower classes to those of his own socio-economic level. Concurrently, he began to develop a deep and abiding hatred of the upper classes in Latin America. Guevara entered medical school in 1947, graduating in March 1953. Following graduation, he left Argentina to visit parts of South and Central America. He first heard of Castro and his raid on Moncada while in Costa Rica. By early 1954, Guevara had become involved in an unsuccessful countercoup attempt in Guatemala. A marked man, he took refuge in Mexico; a few months later, he met Fidel Castro and decided to join his movement. 19/

Alberto Bayo. Alberto Bayo was a former acquaintance of Fidel's who was in exile from Spain. Bayo was born in Cuba in 1892, but had migrated to Spain. He studied military tactics at the Spanish Infantry Academy, and fought as a member of the Spanish army for eleven years in Spanish Morocco. Having also seen extensive service as a Loyalist during the Spanish Civil War, Bayo was generally regarded to be an expert on guerrilla warfare. Castro considered him to be the ideal person to train an expeditionary force for the return to Cuba. 20/

Castro's Revolutionary Platform. The first documented evidence of Castro's revolutionary platform is contained in the transcript of the trial which followed his ill-fated attack on the Moncada Fortress. Addressing the court in his own defense, Castro set forth the problems his revolution aimed to solve: land, housing, industrialization, unemployment, education and health, as well as the restoration of public liberties and political democracy. Although Castro delivered his speech in private, his followers later published it in full and widely distributed it. The document was entitled History Will Absolve Me.
It contained the text of Castro's speech at the trial and listed the five basic revolutionary laws upon which Castro planned to rebuild Cuba once Batista was defeated:

1. Assumption of all legislative, judicial and executive authority by the Revolution itself, pending elections, subject to the Constitution of 1940.

2. Land for the landless, through the expropriation of idle lands, and through the transfer of legal title from big owners, renters and landlords to all sharecroppers, tenants and squatters occupying fewer than 165 acres -- the former owners to be recompensed by the state.

3. Inauguration of a profit-sharing system under which workers employed by large industrial, commercial and mining companies would receive 30% of the profits of such enterprises.

4. Establishment of minimum cane production quotas to be assigned to small cane planters supplying a given sugar mill, and the assignment of 55% of the proceeds of the crop to the planter, against 45% to the mill.

5. Confiscation of all property gained through political malfeasance or in any other illicit manner under all past regimes. 21/

Over the course of his insurrection, virtually all of Castro's speeches and proclamations referred to these revolutionary laws. Their real beauty was their adaptability. Castro frequently altered their priority, percentages and/or scope to suit his audience and strategy. Depending upon how the laws were presented or explained, they had almost universal appeal. Castro's incessant manipulation of these laws was not without its drawbacks, however; the average Cuban often found it difficult to understand just what Castro's rebellion was about. This was particularly true of the peasants. In fact, Castro's own followers sometimes had problems staying abreast of their leader's ideas.

Prelude to Invasion.
Between late 1955 and early 1956, Castro amassed the nucleus of an invasion force -- veterans of the Moncada attack as well as other recruits from the United States, Cuba and various Latin American countries. After their initial organization in Mexico City, a ranch was leased outside the city in order to engage in more extensive, and private, maneuvers. While his force was training, Castro traveled extensively throughout the United States, attempting to raise financial and moral support for his cause from exiled Cubans and American sympathizers. By the time he returned to Cuba, he had established some 62 Cuban "patriotic clubs," raised approximately $50,000 in cash and received pledges for considerably more. 22/

In late 1955, Castro's movement absorbed the Accion Nacional Revolucionaria (ANR) led by Frank Isaac Pais. The ANR had been a small clandestine organization operating out of Oriente Province. Castro planned to use Pais and his followers to support his landing in Cuba, now planned for July 1956. In March 1956, Castro broke officially with the Ortodoxo Party, establishing the Movimiento 26 de Julio (26th of July Movement or M-26-7) as an independent revolutionary movement dedicated to the overthrow of Batista.* By then, M-26-7 had gained an initial, but firm, foothold in Oriente Province, thanks to the momentum being built by Frank Pais.

*The title of the movement was derived from the date of the attack on the Moncada Fortress. Castro considered that attack the beginning and inspiration of his rebellion.

Castro wanted to coordinate his invasion with an island-wide revolt against Batista to gain maximum effect. In the summer of 1956, he met with Jose Antonio Echeverria and Ricardo Corpion, representatives of the Directorio
Revolucionario (DR), a student organization also advocating Batista's ouster.* After listening to Castro's plans for the invasion of Cuba and overthrow of Batista, the DR representatives agreed to sign a pact with Castro and the M-26-7, stipulating that the two groups would coordinate their future actions. This very important alliance, which became known as the Mexico Pact, succeeded in uniting for common purpose two of the most highly organized factions opposing Batista. Castro had never tried to conceal his activities in Mexico. Consequently, Batista was probably aware of the time and place of the planned invasion well ahead of time. The Mexican police and Cuban Intelligence agency (SIM) kept close tabs on Castro's activities at the ranch outside Mexico City; Mexican authorities also raided it several times, each time confiscating rather sizeable caches of arms. Castro's attempts to purchase a boat large enough to get his group to Cuba met with similar "success." In September 1956, he placed a down-payment of $5,000 on an ex-U.S. Navy crashboat; when Washington checked with the Cuban Embassy about the validity of the purchase, Batista's government interceded and convinced Washington to cancel the sale.

*M-26-7, ANR and DR were not the only revolutionary groups operating in Cuba at the time. Several other organizations such as the Organizacion Autentica (OA), Federation of University Students (FEU) and many others were active against the government: publishing underground newspapers, gathering arms, and engaging in sabotage and other terrorist activities.

By the end of the month, Castro found himself the leader of a highly trained and organized insurrectionist group, but without weapons or the means to get to Cuba. After considerable difficulty, Fidel managed to secure both a boat and arms for his men. The boat was the yacht Granma, designed to carry ten passengers. Castro intended to load her with 82 men plus their weapons, ammunition and food.
Finally, on November 25, 1956, after a delay of several months, Fidel Castro and 81 other members of M-26-7 departed Mexico for Cuba.

Sierra Maestra 23/

Fidel Returns. Castro had allowed six days for the trip. He was expected to arrive on November 30th, which would coincide with a general, island-wide uprising led by Frank Pais and the M-26-7 Santiago group. However, the expedition encountered problems. Most of the 82 men were seasick and the Granma experienced severe engine problems caused by the overloading. As planned, Frank Pais and the M-26-7 went into action on November 30th. Pais had planned to stage a general show-of-force throughout Oriente province and in isolated locations across the island. Pais's plan called for coordinated attacks on Santiago's National Police headquarters and the maritime police station, while keeping the Moncada Fortress under bombardment with 81mm mortars. Castro and Pais assumed that the general population would join in the revolt as soon as the attacks began. Once the police stations had been captured, arms and ammunition would then be distributed to the population, and a full scale attack would be launched against Moncada. Pais began the operation with 86 armed men; he counted heavily upon surprise.* Unfortunately, he lost this element almost immediately when one of his men was captured on the way to man a mortar position. The police, thus alerted, barricaded themselves in their headquarters and fought off all attempts by the rebels to breach their positions. In addition, Batista wasted no time in flying in reinforcements. By nightfall, Pais could see the futility of his position. Unaware of Castro's plight, he canceled the attack and withdrew with his men, fading back into the civilian population. With the exception of a few isolated incidents elsewhere in Oriente province, the general revolt throughout the island did not occur.
Lack of arms, poor organization and limited information on Castro's intentions and timetable were the major problems. However, the events of November 30th were not without successes. Some arms and ammunition were captured and later turned over to Castro, manpower was preserved to fight another day, and the weaknesses in the M-26-7 organization were rather graphically displayed.*

*By November 1956, the Moncada garrison had been reinforced, and now totalled some 2,000 soldiers.

Castro heard of the Santiago uprising -- and its failure -- while still at sea. He had been unable to report his position and problems and thus delay the revolt, because he had only a radio receiver and no transmitter. Undaunted by the setback Santiago represented, he decided to forge ahead with the landing. The original plan had been for Fidel and his men to land near Niquero on November 30th (see Map #2). He was to join forces with approximately 100 men under Crescencio Perez, seize Niquero and Manzanillo, and then proceed (via Bayamo) to join Pais's group in Santiago.** Unfortunately, the Granma's problems and the failure of the Santiago attack placed the whole plan in jeopardy. On December 2nd, Castro's group finally came ashore near the town of Belic, several miles east and south of their intended landing. The yacht was so overloaded that it could not actually beach. Since there were no piers, the men were forced to unload her in water up to their chests. Their debarkation proved to be particularly difficult, not only because of the depth of the water, but because they landed in a swamp and were spotted almost immediately by alert sea and air patrols.

*Only three members of M-26-7 were killed during the fighting.

**Perez was a sort of bandit-patriarch of the Sierra Maestra. Convinced by Pais to join M-26-7, he later became one of Castro's most trusted lieutenants.
24/ Unable to make contact with Perez's group, under fire from the Cuban Air Force and pursued by the Cuban Army, Fidel decided to abandon his original plan and converge on the jungle-covered and precipitous Sierra Maestra Mountains, a bandit haven not under government control.* Thus began an arduous inland march with Batista's forces in trace. On December 5th, the army cornered Castro's troops and almost decimated them with artillery fire and air attacks. 25/ At that point, Fidel elected to split his force into smaller groups in the hope that they would be more likely to break through the encircling army. This maneuver was probably what saved some of them. Fidel's own group, which by now only numbered three men, was forced to hide for five days in a cane field without food or water. Other groups were not so lucky. Some were overrun completely; all who surrendered were executed. 26/ About this time, word was circulated by the Cuban government that Castro and his entire group had been killed. This popularly held belief, repeatedly reinforced by government propaganda, was not disproven until the New York Times published an interview between Castro and Herbert Matthews in February, 1957. 27/

*Perez's group had been in place, but Castro's delayed arrival plus heavy army patrols had caused him to withdraw.

The Rebellion Begins. Shortly before Christmas, Castro and 11 survivors of the Granma expedition, including Raul Castro and "Che" Guevara, assembled at Pico Turquino, the highest mountain in the Sierra Maestra range. The outlook was not good. The almost total loss of their provisions and ammunition placed them at the mercy of the local inhabitants. However, as usual, Fidel's confidence was unshakeable. Upon reaching the mountains, he is reported to have asked a peasant: "Are we already in the Sierra Maestra?" When he heard that the answer was "yes," he concluded, "Then the revolution has triumphed." 28/ Castro might have been overly optimistic.
For days he and his small group travelled continuously throughout the mountains, fearing capture by government forces, although the Cuban Army made no real attempt to find them. They slept on the ground and stayed alive by eating roots. Eventually, Fidel and his men located Crescencio Perez, the man they had planned to link up with at Niquero. Perez helped the rebels obtain food from the peasants, and lent material support in the way of arms and ammunition. Castro's position soon improved to the point where he felt comfortable enough to launch his first attack against a government outpost.

Attack on La Plata. On January 16, 1957, Castro and 17 of his followers attacked a small army outpost of 18 soldiers at La Plata. 29/ The tactics used were the same that would be repeated throughout the next 20 months, with essentially the same degree of success. A daylight reconnaissance was made of the objective, the activities of the soldiers were noted, and approach and retirement routes were plotted. Early the next morning, the surprise attack began. Seven soldiers were killed or wounded, and the position was seized. Precious weapons, ammunition, food and equipment were confiscated and taken back to the mountains, while Castro and his men, anticipating that the Army would attempt to pursue them, took up positions at a prepared ambush site. Later that morning, an army patrol stumbled into the ambush and was virtually annihilated. Incidents like this, coupled with Herbert Matthews' New York Times interview the following month, forced Batista to take Castro seriously; the Army committed more troops to Oriente Province, and a reward of 100,000 pesos was placed on Castro's head. However, Batista's response had little impact upon the rebels. Castro's alliance with the local population, fostered by his respectful treatment of them, gave him an intelligence network the Cuban Army found impossible to defeat.
Castro was kept constantly aware of the army's intentions and position, while the government forces were continually misled and misinformed as to Castro's whereabouts.

Palace Attack. Castro and the M-26-7 were not alone in opposing Batista. Several groups across the island, some aligned with Castro and others not, were in open rebellion against the government. One of these groups, the Directorio Revolucionario (DR), was composed of a group of students from the University of Havana. Allied with the M-26-7 through the Mexico Pact, the DR had been quite active in Havana for several months. On March 13, 1957, they staged an attack against the Presidential Palace in Havana using "fidelista" tactics. The attack took everyone by surprise and would have been successful in killing or capturing Batista except that, by chance, the President had left his first-floor office and gone to his second-floor apartment because of a headache. Twenty-five members of the DR were killed during the fight, and the whole operation was generally acknowledged to be the work of Castro. In what was quickly becoming one of his favorite tactics in response to rebel attacks, Batista ordered the arrest of all known rebels and rebel sympathizers in the Havana area. Those that were found were executed. While this counter-revolutionary technique was somewhat successful in eliminating unwanted opposition, it tended to alienate the population. In conducting these purges, the army and police were not usually discriminatory in their selection of targets. "Body count" frequently became more important than eliminating known rebels.

Organization for Guerrilla Warfare. By mid-April, Castro had acquired more than 50 volunteers from Santiago and other parts of the island; he now formed the first of an eventual 24 "columns" ranging in size from 100 to 150 rebels.
The majority of these volunteers, and those joining in the following months, were middle-class students, merchants or professionals being hunted by Batista's police. Nevertheless, Castro generally would not accept volunteers who arrived without arms; he simply could not afford to feed them. He would turn them away, promising to let them join his group if they came back armed. To obtain arms, would-be "fidelistas" looked for the opportunity to relieve Batista's soldiers of weapons and ammunition. Eventually, more guns came, sometimes from underground supporters, sometimes flown in from overseas, but most (about 85 percent) directly from the enemy itself. In the beginning, the rebels dared make forays against only the most isolated of the army outposts, and even then a rifle was so valuable that if a rebel abandoned one during a battle, he had to go back unarmed to retrieve it. As Castro used to tell his men, "We are not fighting for prisoners. We are fighting for weapons." 30/ Recruits who were allowed to stay were put through a lengthy period of political, physical and military training patterned after that conducted in Mexico. The training was purposely difficult; Castro wanted only the toughest and most dedicated to remain. Fidel kept his column constantly moving throughout the Sierra Maestra, seldom stopping for more than 24 hours. Even though the rebels ate but once a day, adequate food stores were a constant problem. A tiny tin of fruit cocktail was considered a great luxury. To maintain discipline, strict rules were enforced; food was never taken from a peasant without permission and payment, and a rebel officer was never to eat a larger portion than his men. A person could be shot if merely suspected of being an informer. Alcohol was forbidden, and sex was discouraged unless the couple consented to be married. Fidel shared the mountain hardships with his followers, often out-distancing them in an effort to set the example.
His ability to march for hours without stopping earned him the nickname "El Caballo" -- The Horse. Despite such spartan conditions, Castro's group continued to grow. Sleeping on the ground gave way to hammocks, and later, more permanent camps with "Bohios" (huts), kitchens and hospitals. 31/ As Fidel's stability in the Sierra Maestra grew, so did his intelligence system. Warning networks were established using the well-treated farmers and mountain people as spies.

Beards. At this point, it is worth noting the origin of the famous rebel beard. Initially, Castro and his followers grew beards for the very practical reason that they had no razors and little soap or fresh water for shaving. As the rebellion continued, however, the beards took on important meaning. In time, they became so conspicuous and so representative of Castro's movement that beards became the major distinguishing feature between rebels and ordinary citizens. Unless the bodies were bearded, photos of "rebels" killed by the army fooled no one. Beards became such a symbol of rebellion that a Batista soldier on leave who had allowed his beard to grow was machine-gunned and killed from a police car in the middle of Santiago, having been taken for a rebel. 32/ Later, as part of a planned general strike, Fidel intended to infiltrate the towns with members of his rebel force. Wishing to make a good impression on the population, he considered having these men shave off their beards. He was finally convinced otherwise by Enrique Meneses, a newspaper photographer, when Meneses pointed out that "Any photographs in existence anywhere in the world at the time would lose their news-value if the rebels shaved off their beards." 33/ Eventually, following Castro's victory, everyone, except Fidel and a few others, was ordered to shave. It seemed that some individuals, who had never even been near the Sierra Maestra, were wearing beards, attempting to capitalize on the implication.

El Uvero.
With reinforcements and added firepower, Castro began to expand his base of operations. On May 28th he led a band of 80 guerrillas against the military garrison of El Uvero (see Map #3). El Uvero, located on the seacoast at the foot of the Sierra Maestra, was isolated and manned by only 53 soldiers. The garrison presented an ideal target for Castro's limited forces and assets. Using the "fidelista" tactics described earlier, Castro's group took the outpost by surprise when they approached the garrison in the early morning hours. The fighting, though intensive, was over in about 20 minutes. The army regulars sustained 14 dead and 19 wounded, and Castro's forces lost six killed and nine wounded. The rebels then confiscated the garrison's arms, ammunition and supplies. The battle was Castro's first significant victory, proving that, given the right conditions, regular army forces could be soundly defeated. In Guevara's words: "... we now had the key to the secret of how to beat the enemy. This battle sealed the fate of every garrison located far enough from large concentrations of troops, and every small army post was soon dismantled." 34/ The psychological value of the victory cannot be overemphasized; it brought to fruition months of hardship and training, and, like an elixir, immeasurably bolstered dedication to the struggle.

Batista Reacts. Faced with the increased irritation of incidents involving Castro, Batista decided to alter his strategy of ignoring Castro to one of containment. After El Uvero, the army abandoned its forays into the Sierra Maestra, gradually withdrawing from isolated outposts that were not vital. Castro's small force was left free to roam the mountains, but was kept from operating in the open plains. The Army declared, and attempted to enforce, a "dead zone" between the mountains and the plains to prevent Castro's forces from venturing beyond the mountains or communicating with urban organizations.
Castro, in turn, carefully avoided openly meeting government forces. Meanwhile, the Cuban Air Force carried out a program of saturation bombing on suspected guerrilla strongholds. Following the El Uvero attack, government censorship was again imposed, and the Presidential elections that had been scheduled for November 1, 1957, were postponed until June 1, 1958. Coincidentally, Batista's counter-terrorist measures against the civilian population were increased, especially in Oriente Province. Generally, these amounted to the loss of all civil liberties and the institution of martial law. Illegal searches and seizures, torture and outright murder became commonplace. 35/ Batista's soldiers would stop at nothing to present the impression that they were in control of the situation. Frequently, when frustrated by their inability to gain information or capture rebels, soldiers would summarily execute civilians, claiming that they were either guerrillas or rebel sympathizers.

The Sierra Maestra Manifesto. Despite calls for negotiations between the government and the rebels by the Institutions Civicas (IC), a loose federation of civic and professional associations, President Batista was unrelenting. Castro, on the other hand, joined by Raul Chibas and Felipe Pazos,* responded to the IC by issuing a proclamation which they called the "Sierra Maestra Manifesto." Although not detailed, it became one of the basic rhetorical documents for the M-26-7 Movement.

*Key members of the Resistencia Civica, another revolutionary organization aligned with the M-26-7.

Although it was drafted on July 12, 1957, it was not seen in print until it was published in the Bohemia newspaper in Havana on July 28th. In addition to rejecting Batista's election plan, the Manifesto called for:

1. A Civic Revolutionary Front with a common strategy for the struggle.
2. A provisional government, headed by a neutral leader selected by the civic association.
3.
Free elections within one year of the establishment of the provisional government.
4. Reforms in the areas of political freedom, civil service, civil and individual rights, agriculture, labor unions and industry.
5. An end to arms shipments from the United States to Batista.
6. Depoliticization of the armed forces and abolition of military juntas. 36/

The "Sierra Maestra Manifesto" was doubly important because it set a more neutral tone on the subject of reforms than some of Castro's earlier bellicose statements released through newspapermen such as Herbert Matthews, and gave political substance to the revolution by outlining a specific organizational structure and goals for the proposed government.

Death of Frank Pais. While Castro was busy conducting rural guerrilla warfare from the mountains, Frank Pais was active in establishing urban organizations throughout the island. With the death of Jose Antonio Echeverria during the unsuccessful Palace Attack, Pais was left alone to carry on the M-26-7 movement in the cities and towns. By the summer of 1957, Pais' tremendous organizational skills had begun to show progress throughout Oriente, even down to remote village levels. While Castro's activities seemed confined to the areas immediately surrounding the Sierra Maestra, Pais was spreading his influence far and wide. As his reputation as an M-26-7 organizer grew, Pais came under increasing pressure from Batista's forces in Santiago, his base of operations. Finally, in July 1957, an all-out manhunt for his capture was launched by the Santiago police. As the net tightened, Pais knew he would have to leave the city. Since exile was out of the question, he chose to join Castro in the mountains. As he prepared to leave, the house in which he was hiding was surreptitiously surrounded by police. As Pais walked out of the house, he was gunned down. The death of Frank Pais marked a turning point in the internal organization of the M-26-7.
With no firm leadership evident elsewhere, the heart of the movement gradually centered in the rural campaign being waged from the Sierra Maestra. For the next several months, the urban arm of the M-26-7 was assigned one specific role: to support and sustain Fidel Castro's guerrillas. The deaths of Echeverria and Pais had eliminated two of the three genuine leaders of the Cuban insurrection; only Castro remained. Equally important, two potential challenges to Castro's post-revolutionary leadership were eliminated.

The Cienfuegos Uprising. On September 5, 1957, the most serious and ambitious attack to date against Batista's regime was launched by a group of young naval officers in the coastal town of Cienfuegos in Las Villas province. The navy was traditionally not as pro-Batista as the army. A large number of naval officers were frustrated by Batista's propensity to appoint men who had not graduated from the Mariel Naval Academy to the highest ranks of the naval service. Rear Admiral Rodriguez Calderon, Chief of the Cuban Navy, was such a man. He was thoroughly despised by young naval officers. The uprising was to be a coordinated effort among youthful elements of the Cuban Navy stationed at Cayo Loco Naval Base in Cienfuegos and M-26-7 activists positioned in Havana and Santiago. Originally scheduled for May 28th, the plan called for an island-wide revolt, to consist of two phases. In the first phase, a navy frigate would shell the Presidential Palace in Havana, while, simultaneously, navy pilots would bomb Camp Columbia. M-26-7 urban cadres would then capture the Havana radio stations and call for a general strike. The second phase would center around uprisings at all navy bases beginning with Cayo Loco. When the May 28th plan was compromised by an informer, the revolt was rescheduled for September 5th to coincide with the Army's celebration of the 1933 "Sergeant's Revolt."
On September 5th the plan was again postponed because of another breach of security, but for some reason -- either poor communications or stubborn determination -- the second phase of the Cienfuegos operation went into effect at the appointed hour. Although the rebels initially succeeded, Batista responded with brutal force; aircraft, tanks and troops were rushed to Cienfuegos to crush the uprising. The ensuing battle left more than 300 dead. 37/ A handful of the rebels escaped to the Escambray Mountains where they continued to wage war against the government. It is unclear exactly what the naval officers associated with the Cienfuegos uprising hoped to gain. Evidence indicates that they were highly influenced by elements of the M-26-7 who had infiltrated their ranks. Apparently, the M-26-7 viewed these disgruntled naval officers as a good source of revolutionary fervor and planned to use their dissatisfaction with Batista as a foundation for an island-wide revolt. The naval officers themselves had no stated goals other than to return control of the navy to graduates of the Cuban Naval Academy. Cienfuegos was certainly a military victory for Batista, but not a political one. For the first time, members of Cuba's Armed Forces had united against him. Never again could he depend upon their unified support, the bedrock of his regime. Officers of all three branches had been implicated; not even the Cuban Army had remained faithful. Perhaps even more significant, not since Batista's own revolt in 1933 had military and civilians united to oppose a Cuban President.

Pact of Miami. Members of the major revolutionary factions opposing Batista met in Miami during December 1957 to attempt to unify their efforts. This particular meeting was significant because it was composed of representatives of virtually all of the anti-Batista organizations operating within Cuba or in exile.
After some debate, a number of members of the revolutionary groups, including representatives of the M-26-7, signed a pact signifying solidarity and co-equal status in the struggle against Batista. Provisions of the pact called for the creation of a provisional government and closely resembled those of the earlier "Sierra Maestra Manifesto." When Castro heard of the pact, he was enraged.* In a letter to the conference he wrote: "... while leaders of other organizations are living abroad carrying on imaginary revolutions, the leaders of the M-26-7 are in Cuba, making a real revolution." 38/ What really upset Fidel was that the pact left the M-26-7 on equal footing with the other organizations, although the M-26-7 and the DR had carried the full load of the insurrection. While the DR was apparently ready to grant equal status to the other groups, Fidel was not.

*Castro's representatives had signed the pact, apparently misunderstanding their leader's position, a phenomenon not uncommon at the time because Castro frequently shifted or "expanded" his ideology to suit the occasion.

Castro denounced the pact and reiterated his position on Cuba's future as offered by the "Sierra Maestra Manifesto." In addition, he offered his own candidate, Manuel Urrutia Lleo, for the post of Provisional President. Castro's rejection of the Pact of Miami had major repercussions. First, Fidel demonstrated within his own movement that he would not be the pawn of politicians. Second, he established the M-26-7 as a clearly independent movement, never again to be confused with other organizations. Third, he demonstrated his preeminence among the other opposition groups; and finally, it portended that any future attempts at unity would be fruitless without prior consultation with Castro.

The Second Front. The end of 1957 and the early part of 1958 saw a sort of unofficial cease-fire. Both sides used the period to consolidate their positions and prepare for future operations.
Batista increased his forces and prepared them for mountain warfare. This was done primarily in response to Castro's ill-advised scheme to disrupt the island's single-crop economy. Late in the fall of 1957, just as the sugar crops were to be harvested, Castro's followers began burning the cane fields, hoping to inflict economic damage on the government. Understandably, farmers and local merchants -- many of whom were ardent supporters of Castro -- began to complain to the rebel leader. Belatedly realizing that the harvest was the major source of livelihood for his supporters as well as the government, Castro rescinded his order, thus preserving his popular support. Enough of the sabotage was carried out, however, to gain Batista's attention. Recognizing the potential seriousness of that kind of action if it were to be repeated on a large scale, the President resolved to end Castro's rebellion the following year. This decision proved to be the beginning of Batista's downfall. Castro, meanwhile, was engaging in a program to paralyze the rail and road networks near the Sierra Maestra. Gradually he was expanding his control beyond the mountains to other parts of Oriente province. By the beginning of 1958, no vehicles, trains or military patrols could move at night in the Manzanillo-Bayamo area without being ambushed. On January 25, 1958, President Batista restored constitutional guarantees everywhere on the island except Oriente province. Under increasing pressure from the United States, he called for free elections, promising to turn over the government to his elected successor. However, he retained the right to control the armed forces. Earl Smith, the U.S. Ambassador to Cuba at the time, reported that Castro indicated, at least unofficially, his willingness to accept general elections provided that Batista would withdraw his troops (without their equipment) from Oriente province.
39/* The Papal Nuncio of Cuba, representing the Catholic Church, even attempted to bring the two sides together.** While Batista professed interest, Castro rejected these overtures, saying that the committee appointed by the church was pro-Batista and therefore not acting for the benefit of the Cuban people. 40/ In early March, Raul Castro led a small column out of the Sierra Maestra northeastward toward the Sierra Cristal Mountains with the goal of opening a second front in Oriente province. On March 12th, Raul established the Second Front "Frank Pais," and Fidel issued a 21-point manifesto announcing its opening and declaring that total war would begin against Batista on April 1, 1958. 41/ Batista responded by airlifting more government troops into Oriente to reinforce the 5,000 already there. In addition, constitutional guarantees were again suspended throughout the island, and elections were postponed from June 1st to November 3rd. On March 14th, the U.S. government announced its intention to cease the shipment of arms to Cuba. 42/

*The sincerity of Castro's overture is suspect since it violates his "Sierra Maestra Manifesto." Evidence indicates that he planned to use Ambassador Smith for leverage in an attempt to buy time and maneuvering room.

**The Papal Nuncio more likely represented the Church hierarchy and wealthy patrons only, since most young Cubans advocated the overthrow of Batista, although not necessarily in accordance with Castro's plan.

Castro's forces made important advances in the early spring of 1958, prior to April 9th. Besides Raul's second front, there were four other separate guerrilla forces at work in Oriente, keeping the whole province in an almost constant state of turmoil. Uprisings were also reported in Camaguey and Pinar del Rio provinces and the Escambray Mountains. 43/

Total War 44/

General Strike: April 9th, 1958.
As promised in his March 12th manifesto, Castro called for an island-wide general strike to commence on April 9th, 1958. As originally conceived, the strike was to bring the country to a standstill; however, contrary to statements claiming otherwise, the M-26-7 did not yet have the level of urban revolutionary organization, leadership and popular support necessary to make it successful. Batista ordered his 7,000-man National Police force to brutalize strikers wherever they were found; furthermore, the head of Cuba's labor unions promised that anyone who struck would lose their job forever. Needless to say, the strike was a dismal failure and an acute embarrassment for Castro. At this stage of the revolt the majority of the Cuban people simply did not have the confidence to risk their livelihoods, and perhaps lives, for Fidel Castro's dreams. His revolutionary platform was neither well known nor understood. Not surprisingly, after April 9th, Castro placed increased emphasis on the military solution as the principal means of removing Batista from power. Less stringent measures virtually disappeared from his strategy.

Batista's Summer Offensive. Meanwhile, interpreting the strike's failure as a sign of Castro's vulnerability, Batista surged forward with his plan to mount a summer offensive against the rebels' forces in the Sierra Maestra. General Eulogio Cantillo was appointed to head the campaign, and in early May he presented his plan to Batista and the general staff. Basically, Cantillo's strategy called for a 24-battalion attack against Castro's stronghold in the Sierra Maestra. He planned to establish a blockade around the mountains to isolate the guerrillas from potential supplies, arms and men. Once the blockade was in place, Cantillo envisioned that the army would attack the rebels from the north and northeast with 14 battalions while holding 10 in reserve.
Faced with such overwhelming odds, Castro would have no choice but to withdraw to the west into the plains north of Santiago, or risk being driven into the sea. Cantillo reasoned that if Castro's forces could be forced into open terrain, they could be easily eliminated. Batista approved Cantillo's basic plan, but feared that such large numbers committed to one operation would leave other areas of the country dangerously exposed. Instead, Cantillo was given 14 battalions (roughly 12,000 men), of whom approximately 7,000 were new peasant recruits who had responded to Batista's recent recruiting drive. The latter were poorly trained and generally unreliable.* Cantillo was dissatisfied with the number and quality of troops he had been given for the offensive, and argued strenuously with Batista for more forces. The President remained firm, however, claiming that he could not afford to shift troops who were guarding private farms and sugar mills. Still disgruntled and now pessimistic, General Cantillo nevertheless proceeded with his plans.**

*The vast majority of these peasants had joined purely for economic reasons. The Army offered steady employment while farming did not.

**Cantillo's concern stemmed primarily from his belief that Castro's forces numbered between 1,000 and 2,000 veteran guerrilla fighters. The number was actually much closer to 300.

The government's offensive was still in the planning stages when Batista made his first error. In early May the President had installed General Cantillo to replace General Alberto del Rio Chaviano as the head of army forces in Oriente province. Chaviano had frequently shown himself to be incompetent in trying to deal with the rebels, and Batista did not trust him. However, Chaviano had a strong ally in his father-in-law, General Francisco Tabernilla, Sr., the Chief of Staff of the Cuban Army. Tabernilla convinced Batista to reappoint Chaviano to Oriente province. The President acquiesced and ordered the province to be split; Chaviano was given command of the eastern sector, with the Central Highway as the dividing line between the two generals' spheres of influence.

Cantillo was furious because the reappointment represented a political rather than a military decision. Cantillo's fears were not unfounded: Chaviano was in charge of the sector in which Raul Castro's guerrillas operated, but made no attempt to engage them. Worse yet, he frequently interfered with, or failed to support, Cantillo's efforts in the western sector. Finally, Chaviano undermined Cantillo's campaign by frequently complaining to Tabernilla that General Cantillo's ineffectiveness was causing the government to lose control of Oriente province. As a result, Tabernilla was often slow to extend much-needed logistical support to Cantillo's forces, assuming that the supplies and ammunition would only be wasted.

All of this military intrigue and infighting only served to highlight Batista's inability to conduct an effective military operation. For years the President had played one officer against another, until none of them were capable of leading any serious military operation. Since enlisted troops and junior officers were aware of this impotency among their commanders, discipline and morale were at a low ebb before the summer offensive began.

By the middle of June 1958, General Cantillo was completing his plans. Rather than blocking with his poorer troops and using his better units to drive Castro's guerrillas onto the plains, Cantillo's tactics amounted to a series of piecemeal attacks.* Castro's strategy was by now standard: bleed and exhaust the enemy until the time was ripe for counterattack. He relied heavily upon minefields and ambushes to protect his flanks. His main tactic was to allow the army to move forward, extending its lines, then hit the advanced guard and fall back. The maneuver was to be repeated as many times as possible.
In the event the army penetrated deep enough to threaten the guerrillas' base camp, Castro's forces prepared an extensive trench and bunker network designed to hold the enemy back from the vital areas. If necessary, this network would be manned by Guevara's column, allowing Castro to freely move the remaining columns along interior lines to the weakest point, counterattacking when the opportunity arose. However, this close-in defense was never necessary.

Cantillo launched his initial attack with two battalions moving out from the Estrada Palma Sugar Mill at the base of the Sierra Maestra on June 28, 1958 (see Map #4). The force relied upon a single road as its axis of advance. Flank security was poor. Less than four miles from the mill, forces under the command of "Che" Guevara attacked the vanguard battalion. Thrown into disarray, the battalion stopped while armored cars were brought up to clear the battalion's flanks. As the armored cars deployed, they ran directly into minefields the rebels had placed on either side of the road. Several of the cars were destroyed. As the Cuban soldiers panicked and attempted to retreat, Guevara's sharpshooters opened fire and killed several. The situation totally degenerated when the second battalion failed to come to the relief of the first. As both battalions began to withdraw, the guerrillas moved forward, covering the flanks of the retreating column. The sharpshooters now inflicted heavy casualties on the routed soldiers. In total, the regular army suffered 86 casualties compared to only three for the rebels. Guevara's forces captured some 65 weapons and 18,000 rounds of ammunition. General Cantillo's plan to force Castro's rebels onto the western plains was not working.

*It is hard to say whether these tactics grew from ineptness or caution. Considering Cantillo's reverence for his enemy, it was probably the latter.
Castro's position was too strong to be taken with a single point thrust, so Cantillo devised a daring plan that featured an amphibious landing at La Plata, a coastal town south of Castro's Turquino Peak base camp. Cantillo envisioned a pincer movement with a single battalion landing at La Plata, a two-company landing a little further to the west and a simultaneous assault by another battalion from the north and east. If the plan worked, Castro's base on the western slope of Turquino Peak would be surrounded on three sides. The guerrillas would be forced to stand and fight against overwhelming odds, or withdraw to the plains where they were especially vulnerable.

On July 11, 1958, Battalion No. 18, commanded by Major Jose Quevedo Perez, a former student colleague of Castro's at Havana University, landed about ten kilometers southeast of Turquino Peak at the mouth of the La Plata River. Quevedo's troops, most of whom had never experienced combat, moved cautiously inland, expecting an ambush at any minute. Alerted by his intelligence network that the landing had occurred, Castro did not disappoint them. In the classic fashion of the Cuban Army, Quevedo's soldiers blundered into Castro's ambush. Working rapidly and moving constantly, the rebels fragmented and then surrounded the battalion in a matter of minutes.

Observing the battle from a helicopter, General Cantillo decided that while Castro was busy besieging Battalion No. 18, he might be vulnerable to a flanking surprise attack. Consequently, Cantillo ordered the planned second landing of two companies to the west of La Plata. Again, Castro's intelligence paid off; he had been warned to expect this tactic. In response, the guerrilla leader had emplaced two .50 caliber machine-gun sites overlooking the beach intended for the second landing. The vicious grazing fire that these positions produced forced the lightly-armed landing barges to turn back.
Cantillo ultimately had to land the two companies behind Battalion No. 18 at La Plata. His amphibious plan in obvious jeopardy, General Cantillo shifted his emphasis to Battalion No. 17, which was attempting to bring pressure on Castro's position from the north and east. Meanwhile, upon learning that the leader of the army forces was his former classmate, Castro repeatedly called upon Quevedo to surrender and join the revolution. Each time Quevedo declined, and the fighting continued. Quevedo believed that reinforcements would eventually arrive and simply would not capitulate even though his position was increasingly untenable. What Quevedo did not know was that Battalion No. 17 had met determined fighting against Guevara's column, and had withdrawn. General Cantillo, acknowledging that the operation was another failure, now looked for another strategy. Disheartened and exhausted, Quevedo finally surrendered his command on July 21st. In all, his force had suffered 41 killed and 30 wounded. Castro's rebels had but three deaths, yet managed to capture 241 prisoners, 249 assorted weapons including bazookas, machine guns and mortars, and 31,000 rounds of ammunition.

By the end of July, Cantillo's confidence in the Army's ability to defeat Castro was rapidly waning. In a confidential report to Batista, he described the rebels in superhuman terms: ... (they) can tolerate staying for days at the same place, without moving, eating or drinking water. Furthermore, he still believed he was facing a force of between 1,000 and 2,000 rebels.*

*Cantillo's overestimation can partially be attributed to faulty intelligence. However, the primary reason stems from the fact that he refused to believe that a force of only 300 men could be so effective.

Cantillo's assessment of his own troops in the same report was far different. He cited low morale and discipline, plus a lack of weapons. One of the main problems affecting morale was the troops': ...
awareness that there is no strong penalty against those who surrender or betray their unit, and that falling prisoner to the enemy ends all their problems, has sapped the will to fight through the ranks ... . The number of self-inflicted wounds is extraordinarily high. It is necessary to punish troops refusing to advance and to occupy their positions. 45/

A review of the record reveals that Cantillo's forces suffered considerably more than low morale and weapons shortages. His forces lacked tactical knowledge in military operations in general, and counterguerrilla operations in particular. In addition, a lack of command unity above the battalion level (except for Cantillo himself), and the refusal of many of his officers to fight, contributed to the generally poor performance of his units. With these problems weighing heavily upon his mind, Cantillo decided to make one more attempt to defeat Castro.

The General's new strategy was based on a venture designed to capitalize on the tactical situation remaining from his last plan. Battalion No. 17 was still stranded in the mountains following its abortive attempt to relieve Quevedo's battalion. Cantillo planned to trick Castro into pursuing Battalion No. 17 as it withdrew, pulling the rebel leader into an ambush by making him think that the regular army was in full retreat. Cantillo's plan consisted of developing a triangular perimeter around the town of Las Mercedes, located to the north of the Sierra Maestra. To preclude any chance of escape should the rebels take the bait, Cantillo also stationed several companies on the flanks of the retreating battalion. The General hoped that the rebels would pursue Battalion No. 17 in its retreat from the mountains until it became impossible for them to escape the army's encirclement. Cantillo's plan depended upon Castro's probable ambition to defeat a second battalion within a one-month period.
He correctly guessed that Fidel would want to take advantage of his newly acquired firepower and the apparent demoralization of the retreating troops. Cantillo read Castro perfectly. Overly anxious to score a major offensive victory and sustain the momentum of his insurrection, Fidel was ripe for this kind of ruse. It played not only to his sense of drama, but to his ego.

Las Mercedes.

The Battle of Las Mercedes began on July 29, 1958. Just as General Cantillo had hoped, the opportunity to defeat another army battalion was too much for Castro to ignore. As Battalion No. 17 began to retreat, Fidel ordered the complete mobilization of his Sierra Maestra columns. With uncharacteristic abandon, the rebel leader plunged his forces headlong into Cantillo's trap. On the first day of the battle, about half of Castro's forces positioned themselves along Battalion No. 17's withdrawal route, while the rest of the rebels kept pressure on the battalion's rear guard. In classic "fidelista" style, the rebels opened up on the battalion's advance guard as soon as it entered the ambush site. The rebels quickly dispatched 32 soldiers before realizing that they themselves were in an ambush, and that Battalion No. 17's advance unit had been the bait.

As regular army forces began to close on the ambush site, Major Rene ("Daniel") Ramos Latour, commander of the guerrilla forces now engaged, attempted to withdraw his column while calling to Castro for reinforcements. Fidel responded by moving to the aid of his beleaguered column, only to move within the encirclement himself. Seizing upon the situation, General Cantillo moved to take the unique opportunity of engaging the guerrillas on the plains by ordering three battalions from the Estrada Palma post into position against the rebels. In addition, the General further increased his forces by committing another 1,500 troops from the Bayamo and Manzanillo garrisons. Toward the end of the day, Castro finally realized his precarious position.
He sent word to "Che" Guevara, describing his serious situation. Guevara, probably the best of the guerrilla leaders from a tactical viewpoint, had the ability to see the whole battlefield in any given encounter. After receiving Castro's report, he quickly deduced Cantillo's plan. "Che" realized that Castro could be saved from disaster only if Cantillo's reinforcements could be delayed. Without hesitation, Guevara and his forces attacked the reinforcing column as it moved into position near Cubanacao, inflicting serious casualties and capturing some 50 prisoners. This action caused a brief impasse in the fighting, during which Castro was able to withdraw some of his troops and consolidate the rest into better defensive positions.

July 31st, despite Guevara's brilliant action, found Castro still entrapped. By now, the guerrilla forces had suffered some 70 rebels killed. The price of Fidel's vainglory had become quite high. Still, General Cantillo did not press his advantage. He still believed that Castro's forces numbered much higher than was actually the case. In addition, his great respect for the guerrilla fighters' tenacity made him naturally cautious. He seemed to be waiting until he was absolutely sure of victory before he proceeded with the action.

Early on August 1st, Castro sent a messenger to General Cantillo asking for a ceasefire and negotiations. Castro, the politician, would try to salvage the situation that Fidel, the guerrilla leader, had caused. Cantillo agreed and sent forth negotiators. In a letter to Cantillo on a page from his personal notebook, Castro wrote: "It is necessary to open a dialogue so that we can put an end to the conflict." 46/ Upon receipt of the letter and after consultation with his advisors, Cantillo decided that the letter was important enough to warrant Batista's attention. Batista was puzzled as to Castro's intentions.
The President was convinced that, despite his losses at Las Mercedes, Castro had the ability to carry on the guerrilla war almost indefinitely. Batista sensed that Castro was only delaying, but on Cantillo's insistence, he decided to appoint a government negotiator and personal representative to return with Cantillo to talk to Castro. Batista's analysis of Castro's scheme was accurate, although neither the President nor Cantillo guessed the extent of Castro's peril. Fidel kept the discussions going until August 8th, by which time he had managed to remove his forces from Cantillo's grasp. After the negotiations failed, Cantillo and Batista found that they had no one left with whom to resume the battle.

The impact of this debacle upon the morale of the Cuban army was devastating. The majority of the junior officers who had fought so hard over the preceding weeks were disgusted that Cantillo had even stooped to negotiate. Moreover, Castro's masterful maneuver had come at just the juncture when the regular army, after having fought well for the first time in the campaign, seemed to have all the advantages.

In later years Castro frequently claimed Las Mercedes as a military victory for the M-26-7. While the results may be viewed as a political success, the fact is that Las Mercedes almost resulted in a disaster for his movement and in Fidel's capture by the government forces. Ultimately, Las Mercedes was particularly significant in two respects: it marked the final phase of Batista's unsuccessful summer offensive, and established General Cantillo as a point of contact between Fidel Castro and Fulgencio Batista. While the former result faded in comparison to the events that followed, the latter was to be a portentous occurrence.

The Last Campaign 47/

As Batista's summer offensive ended, the regular army forces withdrew to their major garrisons, allowing Castro to commence his own offensive.
On August 21st, Fidel summoned two of his most respected lieutenants, "Che" Guevara and Camilo Cienfuegos. In their presence he signed the general order that ultimately sealed Batista's fate. Guevara and Cienfuegos were to depart the Sierra Maestra between August 24th and August 30th, each at the head of his own column.* He assigned them one primary mission: to march to Las Villas province, more than 600 kilometers to the west. Once there, Guevara was to organize the rebel groups in the Escambray Mountains under the M-26-7 Movement and begin Third Front operations against the government in accordance with Castro's plan to cut the island in half. Cienfuegos, meanwhile, would organize M-26-7 elements in northern Las Villas, and then press on with his own column to open a fourth front in the mountains of Cuba's westernmost province, Pinar del Rio.

*The Second Column, under Cienfuegos, numbered about 60 and the Eighth Column, under Guevara, about 150.

In order to reach these western provinces, however, the guerrilla columns had to transit Camaguey province. The M-26-7 movement was essentially nonexistent in Camaguey province; crossing was therefore a problem. The open, flat terrain and limited vegetation the province offered was not conducive to the brand of rural mountainous operations Castro's guerrillas were accustomed to waging. Further, since the majority of the island's agricultural effort was centered in Camaguey, the province was relatively prosperous, and the populace was generally unsupportive of Castro's aims. To counter the obvious threat of crossing this unfamiliar and unfriendly terrain, arrangements were made to bribe the army commander of Camaguey to guarantee the safe passage of the two columns. Unfortunately, the commander's defection was discovered, and the columns encountered disjointed, but often heavy, resistance from Batista's air and ground forces.
Almost immediately upon entering Camaguey province, Guevara and Cienfuegos were forced to separate their columns; they were not able to reunite until after they reached Las Villas. Once the rebels were detected, the army mobilized its forces and carefully laid a series of ambushes and blockades across the province. Incensed that the guerrillas would be so bold, the army's byword became:

They shall not pass! We shall serve the corpses of their chiefs on a silver platter, because they have had the audacity to think that they can conduct a military parade throughout Camaguey. 48/

Largely because of the attitude suggested in this quote, the progress of Guevara's and Cienfuegos's columns was extremely slow. Often they would have to wait for days, without food or water, before the way was clear to move. As always, they were forced to traverse the most impassable terrain possible, to avoid the army's roadblocks and ambushes. The author of the above quote, Colonel Suarez Suquet of the Camaguey Rural Guards Regiment, made it his command's mission to zero in on Guevara's column. However, "Che" had years of experience in this type of movement behind him, and managed to frustrate every ambush and blockade effort that Suarez devised. By the first part of October, Guevara's column had woven its way through Camaguey, avoiding major confrontation with government forces. On October 12th, Guevara led his force across the Jatibonico River into Las Villas province after being surreptitiously escorted through Suarez's final blockade by an informer. By October 15th, "Che" was installed in the Escambray Mountains. Guevara's tactical skill and patience had again proved successful.

Camilo Cienfuegos arrived in northern Las Villas province a few days before Guevara was established in the Escambray. While government forces were busy trying to capture "Che," the Second Column had managed to slip through relatively unscathed.
On October 14th, Castro wrote to Cienfuegos saying:

There are no words with which to express the joy, the pride and the admiration that I feel for you and your men. What you have done is enough to win you a place in the history of Cuba and of great military exploits. Don't continue your advance until you get further orders. Wait for Che in Las Villas and stay with him. The politico-revolutionary situation there is complicated and it is essential for you to remain in the province long enough to help stabilize it solidly. 49/

Cienfuegos was not to venture westward until the rebels had been able to recover physically, and until the conflict intensified in the areas already under Castro's influence. While Guevara and Cienfuegos were moving to establish the Third Front in Las Villas, the Castro brothers were solidifying their control over Oriente. By mid-October their forces, now numbering about 2,000, were operating freely throughout the province.

Castro's strategy for the next weeks centered on the capital cities of Oriente and Las Villas provinces: Santiago and Santa Clara, respectively. His plan called for the Third Front to capture Santa Clara, thus severing the eastern half of the island from Havana, and leaving the way open for Fidel and Raul to capture Santiago and its military garrison at Moncada. Using the arms that would be captured in these operations, the rebels could then move on Camaguey. Once the eastern half of the island was secured, Castro planned to proceed with his plans to establish the Fourth Front in Pinar del Rio province.

Santa Clara.

The conquest of Santa Clara was left to the combined forces of Guevara and Cienfuegos. Together, their columns had swollen to about 1,000 guerrilla fighters by the first part of December. "Che" was given overall command for the approaching battle.
Santa Clara, geographically in the center of Las Villas province, is surrounded by four strategically located towns that form a kind of man-made barrier around the provincial capital. Guevara's plan was to attack all four towns simultaneously. Cienfuegos and his guerrillas were to operate north of the city while Guevara's forces attacked from the south. To preclude the possibility of reinforcements, Guevara also planned to blockade the major resupply routes to the west (from Havana) and east (from Camaguey). Finally, Guevara planned the capture of the ports of Caibarien, to the north, and Cienfuegos, to the south, to complete Santa Clara's isolation. With the isolation of the capital and capture of the four towns surrounding Santa Clara, including their garrisons, the rebels then would attempt to capture the city.

The battle for Santa Clara began on December 14th when Guevara's columns attacked the town of Fomento, southeast of Santa Clara. The Fomento garrison capitulated, without serious resistance, on the 17th. Leaving a small rear guard, the rebels pressed on to the town of Remedios the next day. To the north, Camilo Cienfuegos advanced with little opposition until he reached the town of Yaguajay. Yaguajay's garrison was defended by a relatively small group of regulars (250) under the command of Captain Abon Ly, a Cuban of Chinese ancestry. Convinced that reinforcements would be sent from Santa Clara, Ly put up a determined defense of his post. Repeatedly, the guerrillas attempted to overpower Ly and his men, but each time they failed. By December 26th, Cienfuegos had become quite frustrated; it seemed that Ly could not be overpowered, nor could he be convinced to surrender. In desperation, Cienfuegos began to use a homemade "tank" against Ly's position. The "tank" was actually a large tractor encased in iron plates with a .50 caliber machine gun mounted on top. It, too, proved unsuccessful. Finally, on December 30th, Ly, out of ammunition, surrendered his garrison.
Cienfuegos, one of the most gallant of the rebel officers, allowed Captain Ly to retain his weapon and accepted his honorable surrender. On December 27th, following the uncontested surrender of the port cities of Caibarien and Cienfuegos, Guevara met with his officers to study the plan for the final attack on Santa Clara. On December 30th, with Cienfuegos's success at Yaguajay, the way was now open to the capital. The early morning hours of December 31st found Guevara's combined forces converging from all directions on the city. By mid-afternoon the battle was over. Having little heart for combat, most of the city's 6,500 regular troops and police surrendered without a fight.

Meanwhile, in Oriente province, Fidel Castro and his rebel army continued their general offensive toward Santiago. From December 23rd to December 26th the offensive in Oriente had cost the rebels 26 dead and over 50 wounded, but the Army had sustained over 600 casualties. 50/ On December 30th, the town of Maffo fell to Fidel after 20 days of siege. The way was now clear to the capital of Oriente, and the battle for Santiago could begin.

Batista's Departure 51/

By the end of December, 1958, Castro's forces controlled virtually all of Las Villas and Oriente provinces, and Camaguey province from its geographical center eastward to Oriente. In Havana, events of the last days of December were beginning to affect the morale of Batista and his high-ranking officers. The Chief of Staff of the Cuban Armed Forces, General Tabernilla, Sr., was actively pursuing a plan to remove Batista and install a civilian or military junta in his place. Tabernilla approached U.S. Ambassador Earl Smith asking for American support for the junta, but Smith replied that he could only discuss such a solution with Batista himself.* Tabernilla next turned to other members of the general staff.
After consultation, they decided that General Cantillo should once again negotiate a settlement with Castro based upon Tabernilla's plan for a junta to succeed the President.

*In point of fact, Washington had been trying for some time to remove Batista from power, while preventing Castro from taking over.

Cantillo flew to Oriente to meet with Castro on December 28th and explain the Chief of Staff's proposal. Castro rejected Cantillo's overture out of hand because it included Batista's escape. Castro wanted the President arrested and brought to trial for crimes against the Cuban people.* Castro also opposed the junta, preferring (he said) a return to constitutional guarantees and democracy. As a counter-proposal, Castro suggested that he and Cantillo join their forces and carry out a joint operation against Batista, starting with the capture of Santiago and sweeping westward across the island to Havana. Under Castro's plan, the army would support the insurrection unconditionally, back the president appointed by the revolutionary organizations and accept whatever decisions were made as to the military's future. 52/

*Castro did not consider Batista's coup d'etat to be a legitimate revolution as had been ruled by the Court of Constitutional Guarantees in 1952. He instead believed that Batista had violated Cuban law and should be punished.

Cantillo would not promise outright support for Castro, but closed the meeting saying that he would return to Havana and consider the proposal. He promised he would send word to Castro prior to December 31st, Fidel's deadline for the attack on Santiago. Upon his return to Havana, Cantillo was summoned to the Presidential Palace by Batista. The President chastised his Chief of Operations for negotiating with Castro without his approval. Cantillo explained that he was under orders from Tabernilla and thought that Batista had approved his mission.
Calling Tabernilla a traitor, Batista asked for Cantillo's support until the President could devise a plan himself. Cantillo agreed. Late on December 31st, after the word of Santa Clara's fall had reached the capital, Cantillo met with Batista again. The President explained his plan of succession to the General. He said that he would be leaving in a few hours, and that Cantillo should assume control of the armed forces. In addition, Batista proposed that a civilian junta be organized with individuals not involved with the government, and that the senior member of the Supreme Court assume the presidency in accordance with Article 149 of the Cuban Constitution. Unhesitatingly, Cantillo agreed to follow the President's plan.

In the early morning hours of January 1, 1959, President Batista released a message to the Cuban people. He stated that, upon the advice of his generals and to avoid further bloodshed, he was leaving the country. At 2:10 A.M. Batista boarded a DC-4 bound for the Dominican Republic with members of his family and those "Batistianos" who knew they could expect no mercy from the rebels. In Cuba, all was lost for Fulgencio Batista; Fidel Castro had triumphed.

NOTES

Chapter III: Castro's Insurrection

1/ Enrique Meneses, Fidel Castro (New York: Taplinger Publishing Company, 1966), p. 29.

2/ The material on Batista is based primarily on: Edmund A. Chester, A Sergeant Named Batista (New York: Henry Holt and Company, 1954).

3/ Unless otherwise noted, the principal sources for the background on Castro are: Jules Dubois, Fidel Castro (Indianapolis: Bobbs-Merrill Co., Inc., 1959), pp. 14-25; Herbert L. Matthews, Fidel Castro (New York: Simon and Schuster, 1969), pp. 17-62; and Meneses, op. cit., pp. 29-38.

4/ Ramon L. Bonachea and Marta San Martin, The Cuban Insurrection: 1952-1959 (New Brunswick: Transaction Books, 1974), p. 10. A birth year for Fidel Castro of 1926 is the popularly accepted year used by most of Castro's biographers.
However, Bonachea and San Martin cite convincing evidence, in the form of a certifying letter from Castro's mother, that attests to Fidel's birth year as 1927.

5/ Carlos Franqui, Diary of the Cuban Revolution (New York: Viking Press, 1976), pp. 1-2. Extracted from an interview taken by Franqui.

6/ This fact is noted by all three of the above principal sources cited in note 3, although Meneses explains that only Castro's detractors make this accusation.

7/ Matthews, op. cit., p. 21.

8/ Dubois, op. cit., p. 15.

9/ Theodore Draper, Castroism: Theory and Practice (New York: Frederick A. Praeger, Inc., 1965), p. 114.

10/ Franqui, op. cit., p. 1.

11/ Meneses, op. cit., p. 38.

12/ The principal sources for this account of the assault on the Moncada Barracks are: Franqui, op. cit., pp. 43-64; Meneses, op. cit., pp. 37-40; Bonachea, et. al., op. cit., pp. 17-28; Matthews, op. cit., pp. 63-77; and Dubois, op. cit., pp. 30-83.

13/ Matthews, op. cit., p. 63. In the conclusion of his defense, Castro argued that he had the right to rebel against tyranny as was guaranteed by Article 40 of the 1940 Constitution. Judge Manuel Urrutia, during the 1957 trial of some of the Granma prisoners, used the same reasoning in his refusal to condemn the accused to death. Urrutia's decision ended his career as a judge and caused his exile, but laid the groundwork for Fidel to choose him, in 1958, as the future President of Cuba. (Matthews, pp. 75-76).

14/ Dubois, op. cit., pp. 84-137; and Bonachea, et. al., op. cit., pp. 34-79.

15/ Meneses, op. cit., p. 39.

16/ Ibid., p. 40.

17/ Wyatt Mac Gaffey and Clifford R. Barnett, Cuba: Its People, Its Society, Its Culture (New Haven: HRAF Press, 1962), p. 235. Although Castro continued to express loyalty to the Ortodoxo Party, he failed to gain the level of Party leadership he had anticipated upon his release from prison. According to Meneses, op. cit., p.
42, indications were that the Ortodoxo Party had split, and Castro had followed the more revolutionary faction. Apparently the more conservative (and largest) element of the Party did not agree with Fidel's advocacy of violence to overthrow Batista.

18/ Franqui, op. cit., p. 90.

19/ Martin Ebon, Che: The Making of a Legend (New York: Universe Books, 1969), pp. 7-35.

20/ Meneses, op. cit., p. 41.

21/ Fidel Castro, History Will Absolve Me (New York: Lyle Stuart, 1961), pp. 35-36. This publication is said to be a reprint of the pamphlet that was circulated following the Moncada trial.

22/ Bonachea, et. al., op. cit., p. 65.

23/ Unless otherwise noted, the principal sources for the Sierra Maestra phase of Castro's revolution are: Dubois, op. cit., pp. 139-324; Bonachea, et. al., pp. 79-197; and Matthews, op. cit., pp. 93-125.

24/ Ernesto Guevara, Reminiscences of the Cuban Revolutionary War (New York: Grove Press, 1968), p. 41.

25/ Fulgencio Batista, Cuba Betrayed (New York: Vantage, 1962), p. 51. In his own account of this period, Batista implies that his artillery attacked Castro's forces as soon as they landed, inflicting significant casualties. His account is vague, however, and disagrees with all other sources, which place the attack several days later.

26/ More than anything else, this practice by the Cuban Army of executing prisoners caused the rebels, in future battles, to fight to the death.

27/ R. Hart Phillips, Cuba: Island of Paradox (New York: McDowell, Obolensky, n.d.), pp. 289-291. Phillips and others indicate that the Matthews interview was a significant turning point in Castro's insurrection. Prior to its release, many Cubans believed Castro to be dead. After the interview was published, fighters with food and weapons began to stream into the Sierra Maestra seeking to join Fidel.

28/ John Dorschner and Roberto Fabricio, The Winds of December (New York: Coward, McCann & Geoghegan, 1980), p. 34.

29/ Batista, op. cit., p. 51.
Batista chose to ignore any activity by Castro or his men until after Matthews' article appeared in February, calling this attack the "act of bandits."

30/ As soon as the Cuban Army pressure against the rebels relaxed, Guevara abandoned Castro's technique of roaming the mountains and remaining constantly on the move. Throughout the major portion of the rebellion, "Che" maintained a base camp in a valley near Pico Turquino. He established a hospital, armament workshop, tailor's shop, bakery and newspaper.

32/ Meneses, op. cit., pp. 56-57.

33/ Ibid., p. 57.

34/ Ernesto Guevara, Episodes of the Revolutionary War (New York: International Publishers, 1968), p. 69.

35/ Ray Brenman, Castro, Cuba and Justice (New York: Doubleday and Co., 1959), pp. 20-21. Not all of Batista's measures were as harsh. More subtle reprisals such as neglect of public schools, garbage collection and street repairs were also used.

36/ Manuel Urrutia Lleo, Fidel Castro & Company, Inc. (New York: Frederick A. Praeger, 1964), pp. 3-4.

37/ Earl E.T. Smith, The Fourth Floor (New York: Random House, 1962), p. 31.

38/ Bonachea, et. al., op. cit., p. 166.

39/ Smith, op. cit., p. 71.

40/ Ibid., pp. 74-76.

41/ Robert Taber, M-26: Biography of a Revolution (New York: Lyle Stuart, 1961), p. 30.

42/ Batista opponents had been petitioning the U.S. State Department for some time to stop the flow of arms to Batista. In addition, there was considerable disagreement within Congress concerning the same subject. As a point of information, U.S. arms shipments to Cuba had actually stopped several months before the official announcement. Only Batista's insistence upon delivery of some 20 armored cars he had previously been promised brought the issue to a climax. See Urrutia, op. cit., pp. 17-18 and Smith, op. cit., passim.

43/ Guevara, op. cit., p. 124. At least two unassociated groups were known to be operating in the Escambray in early 1958.
One, built on the remnants of the Cienfuegos disaster, and the other comprised of the survivors of the 1957 palace attack.

44/ Unless otherwise cited, the principal source is Bonachea, et. al., op. cit., pp. 198-317.

45/ Bonachea, et. al., ibid., p. 248.

46/ Ibid., p. 257.

47/ The principal sources for the discussion of the final campaign are: Dubois, op. cit., pp. 302-351; Bonachea, et. al., op. cit., pp. 266-301; Matthews, op. cit., pp. 127-130; and Dorschner, et. al., op. cit., pp. 81-185.

48/ Bonachea, et. al., op. cit., p. 273. Extracted from a confidential set of instructions issued by Colonel Suarez Suquet, commander of the Camaguey Rural Guards Regiment.

49/ Franqui, op. cit., p. 416.

50/ Bonachea, et. al., op. cit., p. 299.

51/ The principal sources for the climax of Castro's rebellion are: Meneses, op. cit., pp. 85-86; Bonachea, et. al., op. cit., pp. 302-317; and Franqui, op. cit., pp. 481-506.

52/ Bonachea, et. al., op. cit., p. 307.

CHAPTER IV: CASTRO'S REVOLUTION 1/

Cuba's new government was essentially stillborn. When General Cantillo informed Supreme Court Magistrate Carlos Manuel Piedra, a septuagenarian, that he was the new president, Piedra is reported to have said, "Now what do we do, General?" 2/ Cantillo suggested that they call together some advisors and attempt to form a government. Before dawn on January 1st, Cantillo, Piedra and several handpicked civilians met to discuss the situation.

All of those present represented the passing of an era; and, confronting a desperate situation (they) launched into long, rhetorical discourses. While the speakers reminisced about the 1933 Revolution, World War II and Batista's regime ..., Cantillo observed that, 'the whole structure of the armed forces was falling apart while the old men discussed irrelevancies.'
3/ At mid-day, Cantillo suggested that the group move to officially install Piedra as president by gaining the approval of the Supreme Court:

However, the Court refused to legitimize Piedra, citing the legal principle that Batista's resignation was the result of a victorious revolution and not the normal course of events; therefore, the revolution was the font of law, leaving the insurgents in the position of organizing their own government.

Upon learning of the Court's decision, Piedra told Cantillo that he could not serve as president without his fellow jurists' acceptance. When the news of Cantillo's failure to form a government reached the M-26-7 urban underground, they moved to take control of Havana's streets, government buildings and police precincts. By the end of the day, the underground controlled most of the city.

At Long Last: Victory

Castro did not learn of Batista's departure until about 9:00 AM on January 1st. Hearing too of Cantillo's attempts to form a civilian or military junta, Fidel knew he had no time to waste in consolidating his position. Consequently, he immediately ordered "Che" Guevara and Camilo Cienfuegos to proceed to Havana to consolidate M-26-7 control of the capital. Immediately thereafter, he delivered a dramatic radio address to the Cuban people, alerting them to Batista's departure and Cantillo's unlawful attempts to take over the government. In the same speech, he warned the workers to be prepared for a general strike to counteract Cantillo's scheme. Finally, Castro ordered his forces to immediately march on Santiago. News of the events in Havana also reached Santiago early on January 1st. The city was under regular army control, and a battle seemed inevitable. As the guerrilla army approached the city, Castro released a communique to the Santiago garrison: the army was to surrender before 6:00 PM, or his guerrilla forces would take the city by assault.
The commander of the regular forces, Colonel Rego Rubido, flew by helicopter to see Castro. Acknowledging that in light of the events in the capital further bloodshed was useless, Rubido agreed to allow Castro to enter Santiago de Cuba unopposed. In turn, to placate Rubido's officers and troops as well as to ensure their neutrality, Fidel appointed Rubido commander-in-chief of the revolutionary army in Santiago. The city fell into Castro's hands on January 2, 1959. At 1:30 AM that same morning, Castro made his first speech to a large crowd. He spoke from the wall of the same Moncada Fortress where the M-26-7 Movement had begun five and one-half years before. Fidel Castro had fulfilled his promise to liberate Cuba.

He had started with 200 men who were reduced to seventy; seventy who, with another twelve, made up the eighty-two who disembarked at Belic; eighty-two, of whom twelve remained at the end of the first week in the Sierra; twelve, who in twenty-five months had wiped out an army of 30,000 professional soldiers. Fidel Castro was to go through many emotional moments on his journey along the length of the island to the Presidential Palace in Havana, but perhaps none was so significant, so full of drama, as when he spoke at Moncada on that morning of the 2nd January 1959. Next to him stood the new President of Cuba, Manuel Urrutia Lleo, and on his other side Monsignor Enrique Perez Serantes, Archbishop of Santiago de Cuba, the man who had not only baptized Fidel Castro, but also saved his life when Batista wanted to eliminate him after the unsuccessful attack on Moncada. 4/

Castro began his long march to Havana on January 2, 1959. The progress of the march was tediously slow, but Fidel was in no hurry. President Urrutia had been sent ahead to install the government, and Guevara and Cienfuegos were establishing control over the capital. As for Castro, he had accepted the position of Representative of the Rebel Army.
Behind this modest front, however, Castro had a well-conceived plan. He wanted to project his personality over the people and ensure their support for the revolutionary changes he envisioned. He knew that a slow and triumphant march across the length of Cuba would set the island aflame with fervor. Through every town he passed, Castro was greeted with wild enthusiasm. Old ladies blessed and prayed for him, and young women tried to get close enough to touch him. Fidel had become larger than life; all past "caudillos" paled before him.

Meanwhile, General Cantillo was making a last-ditch attempt to consolidate his position. He reasoned, correctly, that a personal attempt to mobilize the army would fail. He needed to find someone who was both dynamic and anti-Batista. Cantillo concluded that only Colonel Ramon Barquin fit both of those requirements and ordered his release from the Isle of Pines.* Upon Barquin's arrival at Camp Columbia in Havana, Cantillo informed him of the situation. He suggested that Barquin attempt to organize the army in order to present a cohesive front to the guerrilla forces when they arrived. Barquin agreed and demonstrated his good faith by having Cantillo arrested.

*Colonel Barquin had led an abortive coup attempt against Batista known as the "Conspiracy of the Pure" in late 1955. He was imprisoned for his efforts.

With Cantillo out of the way, Barquin proceeded to consolidate his position. He planned to play a moderating role between the regular army and the revolutionary forces by demonstrating his neutrality and calling for compromise. Barquin soon came to the same conclusion Cantillo had reached, however; the insurgents were in no mood for compromise and any resistance would only cause unnecessary bloodshed, probably including his own. Consequently, with little ceremony, Barquin delivered command of the Havana garrison to Camilo Cienfuegos. The total victory of the insurrection was now guaranteed.
The last bastion of hope was removed for those who wished to see Batista defeated, but not by Castro.

The Communist State

Fidel Castro finally reached Havana on January 6th, the day after the United States extended formal diplomatic recognition to President Urrutia's government. 5/ Few people paid much attention to anything Urrutia was engaged in, however; Castro was the main attraction and everything else was secondary.

As Castro, surrounded by guerrillas, entered the capital, emotion reached incalculable heights. Banners and flags hung from almost every building in Havana. The national anthem was heard from loudspeakers all along the way, as was the M-26-7 battle hymn ... . Castro stopped at the Presidential Palace to pay his respects to Urrutia. He went to the balcony, and addressed the thousands of people who surrounded the building. An ovation that lasted close to 15 minutes welcomed the Maximum Leader.* Castro gave a short but emotional speech. He closed by raising his right hand, and lowering his voice. The multitude quieted. In a dramatic voice he asked Cubans to open a path for him to walk through. He would show the world, he said, how disciplined Cubans were. As he moved toward the palace's exit, the people, as if enchanted, opened a path for the Maximum Leader ... . This act impressed everyone who saw the event. For customarily emotional, undisciplined Cubans, it was unprecedented. 6/

From the palace, Castro marched toward Camp Columbia where he was scheduled to present a television address to the nation. Upon arriving, he launched into an impassioned oration that lasted for hours. Castro talked about the republic and the revolution entering a new phase. He denounced the cults of personality and ambition that might endanger the revolution and cautioned the people against accepting dictatorships. Toward the end of his speech, several white doves were released as a symbol of peace.
One of the doves landed on Castro's shoulder, causing the crowd to fall into a deep silence. Many fell to their knees in prayer, and a general sense of awe spread throughout the throng and the nation. While Fidel spoke of the evils of caudilloism, he was being simultaneously revered as the "Savior of the Fatherland." His words were falling on deaf ears. No one doubted on that day that Castro was a man inspired with a mission, and that Cuba was on its way to restoration of the 1940 Constitution and a return to democratic reforms.

*English translation of "Maximo Lider," a title bestowed on Castro by his followers.

In the days and weeks that followed, Castro appeared as a man driven by euphoria. Sleeping only three hours a night, he delivered speeches everywhere and anywhere there was a crowd, no matter how small. As Castro governed from his hotel, President Urrutia and his Prime Minister Jose Miro Cardona looked on helplessly from the Presidential Palace. Cardona finally became so frustrated with the dichotomy between what the government ordered and what Castro did that on February 13th, he resigned, suggesting that Castro take over as Prime Minister. Fidel promptly obliged. With his brother Raul as head of the armed forces, Fidel now began to assume control of the "official" destiny of Cuba.

As Castro became more enamored with his own fame, he began to reject any criticism, no matter how constructive, as anti-revolutionary. Any dialogue that questioned his ideas became viewed both as a personal attack and an affront to the M-26-7. It is significant to note that the only people who seemed to sense this, and therefore did not argue with him, were the members of the Cuban Communist Party. While Castro went on expounding theoretical gibberish with few, if any, practical ideas, the communists set about quietly gaining control of the labor unions, press, radio and television.
Castro's personal connection with the Cuban Communist Party prior to the end of 1958 had been virtually nil, although toward the end of the offensive an uneasy alliance had been struck. At the time, however, Castro was accepting aid from almost every corner, thinking he could sort things out later. Further, until Batista's fall seemed inevitable, the communists had been strong supporters of his regime. Batista had needed communist support to help keep control of the workers and labor unions, and allowed them a relatively free rein in Cuban politics as long as they did not present an overt threat.

Castro was now faced with much the same situation. He desperately needed someone to shadow his theoretical ideas, and quietly place them into practice without detracting from his image of being the "Maximum Leader." Behind the scenes, Raul Castro and Che Guevara, both long-established Marxists, provided that practicality, aided by the Cuban Communist Party. Fidel, perhaps at first unwittingly, assisted them by continually denouncing any disruption of his "plans" as anti-revolutionary.

On July 17, 1959, the Havana newspaper Revolucion published a banner headline which read: "CASTRO RESIGNS!" 7/ As expected, the country was deeply shaken. Castro really had no intention of resigning. He was only using the threat to consolidate his control over the government by removing the last of the moderates, whom he considered to be anti-revolutionary. By mid-morning of the 17th, when the people had been sufficiently aroused, with many protesting against his "resignation," Castro appeared on television, announced his resignation and launched a vicious attack against President Urrutia and other moderates who were trying to derail the revolution. Since Urrutia had often publicly cautioned that the communists were becoming too powerful, Castro accused the President of trying to blackmail him with the communist menace.
Prior to the speech, Castro had been out of the public eye for some time. The impact of his sudden reappearance, coupled with his "resignation" and his accusations against Urrutia, was electric; the people clamored for the President's resignation. That same night, Urrutia sought protection in the Venezuelan Embassy. It had been his resignation, not Castro's, that had been accepted by the Revolutionary Cabinet. He was replaced by an obscure communist named Osvaldo Dorticos Torrado. From July 17th forward, Fidel Castro controlled the Cuban people while the Cuban Communist Party controlled the country.

NOTES

Chapter IV: Castro's Revolution

1/ Unless otherwise noted, the principal sources are: Enrique Meneses, Fidel Castro (New York: Taplinger Publishing Company, 1966), pp. 85-101; Ramon L. Bonachea and Marta San Martin, The Cuban Insurrection: 1952-1959 (New Brunswick: Transaction Books, 1974), pp. 313-331; John Dorschner and Roberto Fabricio, The Winds of December (New York: Coward, McCann & Geoghegan, 1980), pp. 251-494; Manuel Urrutia Lleo, Fidel Castro & Company, Inc. (New York: Frederick A. Praeger, 1964), pp. 3-54; and Irving Peter Pflaum, Tragic Island: How Communism Came to Cuba (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1961), pp. 1-14, 28-81.

2/ Bonachea, et. al., op. cit., p. 313.

3/ Ibid.

4/ Meneses, op. cit., p. 87.

5/ Earl E.T. Smith, The Fourth Floor (New York: Random House, 1962), pp. 198-199.

6/ Bonachea, et. al., op. cit., p. 329.

7/ Meneses, op. cit., p. 97.

CHAPTER V: ANALYSES AND CONCLUSION

Fidel Castro won because he had a better plan, better tactics and better organization; Fulgencio Batista lost because he did not. Castro won because he had an idea whose time had come; Batista lost because his idea was no longer supportable. Castro won because he never quit; Batista lost because he did. In trying to discover the reasons for Fidel Castro's success, these comparisons may seem superficial, but they are not.
Castro won because he developed and waged an effective guerrilla war; Batista lost because he could not mount a meaningful counter. Castro was successful because he neutralized the impact of the United States; Batista failed because he could not retain Washington's support. Finally, Castro won because he made the Cuban people believe in him; Batista lost because he could not hold their faith. The remainder of this chapter will concentrate on an analysis of the 1953-1959 Cuban Revolution with the objective of determining why Castro won and Batista lost. Four areas will be examined: Castro's guerrilla warfare technique, Batista's counter-insurgency policies, the role of the United States and the impact of ideology and charisma.

Guerrilla Warfare a la Castro

Prior to Fidel Castro, the traditional method of removing Latin American leaders had been by "golpe de estado" ("coup d'etat") or palace revolution. Generally, a small military detachment would occupy government buildings and the governmental leader and his associates would seek asylum -- usually in a foreign embassy. Then, with little ceremony, a new leader would proclaim himself in control of the government. More than 30 Latin American leaders were deposed by this technique between 1945 and 1955. 1/ Considering the inherent restrictions of a "golpe de estado," these revolts, while abruptly ending the tenure of political leaders and their followers, usually did not upset the prevailing patterns of social, economic or military relations. If Fidel Castro's assault on the Moncada Fortress had been successful, it is quite possible that Fulgencio Batista would have been removed as President, thus establishing Castro as the catalyst for a "golpe de estado." Since the attack on the Moncada Fortress was unsuccessful, Castro turned to a type of combat virtually unknown in Latin America -- guerrilla warfare.
Whatever his intentions, Castro's commitment to this form of struggle implied a long military campaign, sweeping social reforms and major economic change.

Revolutionary Doctrine. This is not to say that Castro and his lieutenants did not know what they were doing. On the contrary, several of the cadre of leaders that surrounded Castro were quite familiar with the principles of guerrilla tactics. Men such as "Che" Guevara and Alberto Bayo had studied the guerrilla warfare doctrines of both Mao and Giap. Bayo, and especially Guevara, became quite adept at altering established guerrilla tactical theory to suit Cuba's social conditions and terrain. They recognized the need to recruit young people who could endure the hardships of guerrilla fighting. Considering Batista's repressive practices, volunteers were easy to find. They quickly learned that surprise, hit-and-run and other highly mobile tactics were well suited to their small numbers and rural surroundings. They discovered the role of deception and became experts at setting ambushes. Perhaps most important, they understood the value of intelligence and were quick to establish an effective network throughout the island.

Still, guerrilla warfare is not solely a military problem. Tactics, training, intelligence and strategy are not enough. Military operations are only one component in an overall system of insurgency. To be totally effective, combat actions must be coordinated with political, economic, social and psychological variables as well. Guerrilla warfare, for example, would not have proved a sufficient condition for success if other variables had not neutralized the power of the United States. 2/ Castro understood this symbiotic relationship and frequently demonstrated its application through the coordinated use of propaganda, urban underground activities and rural guerrilla attacks.

"Che" Guevara. Guevara's contribution to Castro's success cannot be overemphasized.
Teacher, tactician and warrior, he knew the importance of the sanctuary that was afforded by the Sierra Maestra and cautioned Castro not to leave it until the time was right. Guevara recognized the necessity of attention to detail, such as establishing hospitals and schools for the rural people who supported the guerrillas with food, shelter and intelligence. He and Castro both insisted that the local people never be abused. Food and supplies were never confiscated; the guerrillas always paid. Guevara and Raul Castro implemented plans to spread propaganda by radio and clandestine newspapers, carefully explaining Fidel Castro's revolutionary platform to thousands who previously had been unaware of what the rebels were fighting for. 3/ If Castro was the heart of the revolution, Guevara became its soul.

Urban Guerrilla Organizations. As mentioned previously in the study, the Cuban Revolution was not the sole province of Fidel Castro and his rural guerrillas. Several urban organizations existed. Most were either directly under the control of M-26-7 (and thus in coordination with Castro), or nominally associated. These groups may have been the real heroes of the Cuban Revolution because they took the brunt of any reprisals Batista's forces administered. Following every act of revolutionary sabotage or terrorism and every guerrilla success in the field, known members of the various urban undergrounds were sought out and, if caught, executed. This counter-guerrilla technique proved highly successful. As a result, urban guerrilla activity was limited until the later months of the revolution; groups became fragmented and coordination, either among the numerous cadres or with Fidel Castro, became almost non-existent. Nevertheless, urban guerrilla organizations kept pressure on Batista until the end, providing Castro with valuable intelligence and smoothing the way for his eventual takeover of Havana and the island.

Role of the Middle-Class.
The Cuban Insurrection has often been publicized as a middle-class rebellion. This is an arguable point. Castro spent a considerable amount of time and rhetoric trying to convince the rural sugarmill workers and peasants to join his movement. His ideas on agricultural reform were based upon the precept of giving land to the landless and were specifically intended to attract rural lower-class support. Initially, he was not very successful in gaining peasant support beyond the immediate vicinity of the Sierra Maestra. Government propaganda, poor communications from Castro's headquarters and the factionalism of the revolutionary groups made it virtually impossible for the rural inhabitants to get a clear picture of what was happening. It was not until after Batista's summer offensive that rural support for the guerrillas became prominent. By then, Castro had been so successful that government anti-guerrilla propaganda was ineffectual. In addition, by mid-1958, most of the major revolutionary groups were either consolidated under Castro's control or coordinating their efforts with his. Also, "Radio Rebelde," Castro's short-wave radio station, was broadcasting the "revolutionary truth" throughout the island.

Castro's initial failure to generate widespread interest among Cuba's rural poor held his rebellion in check for several months because of manpower shortages. Perhaps more importantly, it had the effect of giving the movement the personnel characteristics which ultimately accounted for its reputation as a middle-class revolt. The leaders of the guerrilla columns and many of their troops had backgrounds traceable to middle-class professions; the urban underground organizations were almost exclusively middle-class; and most of the financial support generated both at home and abroad came from middle-class pockets. Further, Castro deliberately did not antagonize Cuba's middle-class.
In fact, he was careful to cultivate their support and sympathy by exploiting their hatred of Batista, promising free elections and the return of civil liberties, and avoiding social, economic and political statements which might alienate them from his cause. Even after he gained power, he transiently rewarded non-Communist, middle-class supporters with official governmental positions. Ironically, the Cuban middle-class who had originally supported Castro eventually swelled the ranks of the exiled.

Propaganda. In the hands of an expert, publicity can be a powerful weapon. Information about guerrilla leaders and their exploits, if handled properly, may serve to gain sympathy, attract recruits and create doubts about the effectiveness of the established government. The interviews between Castro and Matthews not only accomplished these objectives, but contradicted Batista's contentions that Castro was dead. Thanks to the efforts of Guevara, Castro learned the value of propaganda. As the guerrillas became more organized, Castro began to soften his statements for wider appeal. In addition, he shifted emphasis away from broad entreaties to the general population to specific solicitation of the rural lower-class. Coincidentally, several revolutionary newspapers, bulletins and leaflets began to appear throughout the country, each touting the motives and successes of the revolution and the tyrannical excesses of the government.

However, the most influential propaganda device used by Castro was "Radio Rebelde." First operated in February 1958, the station became an excellent tool by which Castro could personally reach the masses. Every night, exaggerated news of guerrilla victories and proclamations were broadcast throughout the island. In great oratorical style, Castro exhorted Cubans not to fear his revolution, denying Batista's charges that he was a communist and that M-26-7 was a communist movement.
The broadcasts became so effective that Batista resorted to jamming the transmissions and simulating rebel broadcasts over the same frequency to counter the guerrilla propaganda. 4/

Summary. In the final analysis, Castro's brand of guerrilla warfare did not depart dramatically from that of Mao or Giap. He and Guevara merely modified their theories to fit the Cuban scenario. Neither was the middle-class nature of the Cuban Revolution unique. As has been previously pointed out, many revolutions, including the American, French and Russian, had deep, middle-class roots. What did set Castro's revolution apart, however, was its departure from the more traditional forms of Latin American insurrections. Events since 1959 have demonstrated that that lesson has not been wasted elsewhere in the Caribbean Basin.

Internal Defense

Some contend that Fidel Castro did not win, but rather, that President Batista lost. There is in fact evidence to support the contention that Batista never realized the magnitude of Castro's insurgency. It appears that he initially saw Castro as just another rival for political power who, although popular, had no widespread base of support among the Cuban population. Many of the decisions Batista made during the insurrection suggest that he was more concerned with maintaining the tenuous hold he had on the country than eradicating Castro.

Internal Security Forces. 5/ Batista's counter-guerrilla forces numbered as high as 40,000 men and were composed of civilian police, paramilitary forces and military forces. The National Cuban Police Force was built around seven militarized divisions of approximately 1,000 men each. One division was assigned to each of Cuba's six provinces, with a central division maintained in Havana. The force was under the command of the Minister of Defense. In addition to the National Police, the Department of Interior and Justice also maintained police forces which primarily handled undercover activities.
The Rural Guard Corps was a separate paramilitary force operated under the direction of the Chief of Staff of the Cuban Army. Its activities were much like those of a national guard or reserve. It was frequently mobilized to help control demonstrations or strikes and came into extensive use in the latter stages of the revolution. The actions of all these forces were closely coordinated with regular army operations. While the structure of Cuba's regular armed forces has already been discussed, it is significant to note that all of Cuba's internal security forces shared a common weakness: inability to fight a protracted war, especially a counter-guerrilla war. This condition was understandable considering the historical nature of revolutions in Latin America, and certainly not confined to Cuba. In the words of Fidel Castro:

The Armies of Latin America are unnecessary if it's a question of this part of the world lining up against Russia. None of them is strong enough, and if the occasion arises, the United States will give us all the armaments we need. So, why do the Armies exist? Very simple: to maintain dictatorial regimes and let the United States sell them the old arms they don't need anymore .... The Army today, in Latin America, is an instrument of oppression and a cause of disproportionate expenditure for countries whose economies cannot afford it. 6/

Counter-Insurgency Policy. Batista's initial counter-insurgency policy was denial. Until Castro's successful attack upon the Uvero garrison in May 1957, the government's official policy was that no rebel forces existed. Unofficially though, Batista had begun expanding his armed forces soon after the Matthews interview. Batista's approach to combating the insurgency soon formed into two objectives: first, to contain and then defeat the guerrillas in the mountains, and second, to maintain law and order in the cities.
The regular armed forces and the Rural Guard Corps were given the primary responsibility of combating the guerrillas in the field. Their tactics were military oriented, conventional and largely ineffective. The rebels seldom defended the terrain over which they fought; they merely withdrew. Consequently, traditional military tactics using armor and aircraft were of limited value. Hand grenades and machineguns proved to be the most useful weapons. 7/ Except for brief campaigns and forays, Batista's military strategy was generally one of containment. This is amazing when one considers the degree to which government forces outnumbered the guerrillas, especially in the early months of the conflict. In spite of their lack of training, it is difficult to understand why several thousand well-equipped soldiers could not have overrun the Sierra Maestra and killed or captured a few dozen poorly-armed guerrillas. In fact, just the opposite always seemed to happen. With few exceptions, Batista's soldiers proved time and again that they had no heart for fighting, and at the first sign of trouble they usually ran. In general, their officers were incapable of inspiring better performance. Meanwhile, the National Police were given the function of maintaining law and order within the urban areas. Unlike the regular military, they initially dispatched their duties with considerable effectiveness. Able to infiltrate many of the urban guerrilla organizations, the National Police conducted a merciless campaign of counter-revolutionary techniques that included indiscriminate arrests, torture and murder. These countermeasures were so successful that urban activities were severely curtailed through all but the last months of the revolution. 8/ Insurgent countermeasures emphasizing terrorism were not confined to the cities.
Batista's military forces were fond of torturing and summarily executing rebel prisoners, in marked contrast to Castro's policy of returning government prisoners unharmed. The same dissimilarity applied to the treatment of civilians. While Castro's troops were always courteous and honest in their dealings with civilians, government forces were usually contemptuous and brutal. Torture and executions only made the rebels more determined to fight to the death. Castro and the M-26-7 underground were quick to capitalize on the negative aspect of Batista's terrorist-style countermeasures by ensuring that accounts and photographs of the atrocities were widely circulated. Chances are that Batista's decision to use terrorism against the guerrillas and guerrilla sympathizers probably cost him his job. It certainly cost him the support of the Cuban people. Summary. Batista's internal defense plan can be summarized simply by saying that he really did not have one. Initially, he refused to acknowledge the threat that Castro posed. Even after he apparently resolved to deal directly with the rebel leader, he was unable to bring his military might to bear on the greatly outnumbered guerrillas. The only anti-guerrilla successes his forces experienced were those spawned by terrorist activities far worse than anything the rebels were carrying out. In the end, these too proved ineffective. In fairness to Batista, he was attempting to counter a style of warfare that was totally unfamiliar to Latin America and most of the world. Proven tactics were generally unavailable. In addition, Batista was burdened with trying to maintain control of an unstable political situation of his own making. In effect, he was fighting a war on two fronts and largely unequipped to deal with either. Modern wisdom, in retrospect, would suggest that the answers to both of his problems could have been found in civil, not military, measures.
Had he chosen to conduct a social, political and economic revolution of his own by returning the country to the precepts of the 1940 Constitution, Castro would not have had much of a foundation on which to base his revolution. Instead, Batista chose repression, martial law, terrorism and inept military campaigns. Again in retrospect, it is not surprising that he lost. Neutralization of the United States 9/ It is impossible to rank the various factors which led to Fidel Castro's rise to power. However, the neutralization of the United States as an effective supporter of Batista would have to be listed as a variable of extreme importance. Despite his charisma, tactics and popular appeal, Castro's quest may well have proved fruitless if the power of the United States had been applied to his downfall with unqualified vigor. As it happened, however, the capabilities of the United States were applied neither in Castro's direction nor away from it: American power was neutralized. This moderation can be attributed to three factors which greatly influenced U.S. policy toward Cuba. Batista's Negative Image. As the Cuban Revolution intensified, a cluster of negative images became attached to Batista. He was associated with repression and terrorism and portrayed as a leader who profited from corruption. Even Batista's supporters in the United States found themselves compelled to apologize for the nature of his regime. As a last resort, they appealed to Americans on the grounds that, at the very least, Batista was hostile to communism. However, the inclination of the American press to take a dim view toward dictators ultimately placed so much negative publicity on Batista that Washington had no choice but to curb enthusiasm and assistance toward the Cuban President. Ultimately, the United States withdrew military support from the Batista regime, thus hastening its demise. Castro's Confused Image.
Washington's frigidity toward Batista did not mean that the United States embraced Castro. Instead, American leaders became preoccupied with the paradoxes of Castro's career. Was Castro a communist? Was he a nationalist? Did he really plan to restore Cuba to democratic ideals as he had promised? If he was not a communist, why did he ally himself with such avowed communists as his brother Raul and "Che" Guevara? If he was a communist, why was he scorned by most of the communist parties in Latin America, including the Cuban Communist Party? If he was a communist, why did responsible journalists such as Herbert Matthews portray him in such sympathetic terms? Obsession with these questions presented a blurred image of Castro. Unable to reconcile the numerous contradictions in his background or rhetoric, the United States became powerless to classify him as either friend or foe. Without that distinction, Washington could not decide whether to support his ascension or impede his progress. Upon reflection, it is highly unlikely that the American government would have been faced with this dilemma if Castro had announced in the late 1950's -- as he did in 1961 -- that he was a confirmed Marxist-Leninist. Policy Ambiguity. As suggested, the contradictory image of Castro combined with the tarnished image of Batista brought irresolution to America's Cuban policy. As a result, the United States neither offered Castro the kind of massive assistance that may have guaranteed reciprocal obligations, nor continued to support Batista's regime with economic and military sanctions which may have guaranteed his survival. Although the United States never diplomatically abandoned Batista, Washington's ambivalence doomed America's Cuban policy and the Cuban President to failure. Consequently, during most of the late 1950's, America's power, which might have proved decisive to the fate of Fidel Castro, was neutralized.
Castro was allowed to consolidate his power with little or no assistance from the United States as American leaders failed to establish a claim to the benefits due a friend, much less assert the dominance of a militarily superior foe. Summary. The objective of American foreign policy toward Cuba during the 1950's was really no different than it had been since the early 1930's. Through all of Cuba's political turmoil since that time, Washington had always placed itself on the side of stability. The United States supported whichever Cuban leader demonstrated the greatest ability to guarantee domestic, and thus economic and military, tranquility. While Washington was not particularly enamored with Batista's 1952 coup, it could not ignore the fact that Batista was in an excellent position to ensure the security of American interests on the island. Even after Batista's regime began to show signs of failure, American leaders were unwilling to totally desert the foreign policy formula that had been so successful. Consequently, they resorted to a sort of non-policy in the hope that Batista would be deposed, but not by Castro. Unfortunately, their decision proved wrong and the United States was left out in the cold. El Caudillo 10/ While the guerrilla movements of Asia and Africa share many similarities with the Cuban Revolution, each also incorporates diverse and unique elements. The fact that Cuba is an island, for example, introduced special geographic variables that had an impact upon the strategy, tactics and patterns of logistical support for both factions. Similarly, guerrilla leaders may often exhibit charismatic qualities, yet Fidel Castro remains a distinct individual with traits and characteristics which distinguish him from other guerrilla leaders. In fact, it is the uniqueness of Fidel Castro that may have been the overriding factor which caused his revolution to succeed where several others had failed. Charismatic Leadership.
That Fidel Castro qualifies as a charismatic leader is hard to dispute. His political style has always been colorful, extreme, flamboyant and theatrical. He disdains established conventions and routine procedures, and conspicuously departs from organizational norms of behavior and appearance. Castro instilled among his men an absolute certainty of final victory. He was never -- even in the worst of times -- pessimistic. For him, victory was always around the corner, and one final push was all that was necessary to attain what others had never reached. A mystique about his capacity to overcome adversities surrounded him. He was a man who inspired legends. Youthful, idealistic and audacious, he emulated the great Cuban revolutionaries of the past, thus capturing the imagination of the Cuban people. Notwithstanding the above, it is difficult to understand how one man can totally mesmerize an entire population. Three special conditions which are applicable to contemporary Cuba may hold the key. First, Third World countries with large rural populations often show a propensity to gravitate toward charismatic leaders. Many features of the Cuban economy, despite the country's large urban population, qualify Cuba as an underdeveloped country. Second, traditionally, Latin American countries have not formed their political conflicts along the lines of political parties. Rather, most Latin American political conflicts have assumed the form of struggles between two strong leaders or "caudillos." Cuba's history abounds with political movements built around personality cults of this nature. Batista and Castro are the most recent examples. The third, and last, condition involves the morale of the guerrilla fighter. His ability to carry on under adverse conditions is particularly dependent upon his exalted view of his leader. Fidel Castro could evoke intense emotional responses of faith and loyalty. Summary. Castro's image was that of a romantic fighter. 
He was a man who, true to the great traditions of Cuban revolutionaries, would know how to die, fighting with valor and dignity until the end. Most Cubans assumed that, like those revolutionaries before him, Fidel would one day be killed while fighting for their freedom. This fatalistic view of the future of all great "caudillos" was what fueled the mystique surrounding Castro. However, Fidel Castro was a dedicated insurrectionist, born for action and command, but definitely not martyrdom. It was his uncompromising belief in his own destiny that eventually raised him above the "caudillos" who had preceded him and established him as "El Caudillo," the supreme charismatic leader. Conclusion This study began with the premise that an examination of the circumstances surrounding the Cuban Revolution could broaden our professional understanding of the problems associated with countering insurgencies. In so doing we have explored the roots and causes of the Cuban Revolution and traced its evolution. The factors behind Castro's success and Batista's failure have, in retrospect, become all too common in Latin America and elsewhere in the world. The lessons learned from Cuba are the same as those that have been learned and relearned from Malaysia, Vietnam and Nicaragua: the tenacity of the guerrilla fighter, the inadequacy of conventional warfare in a guerrilla warfare scenario, the importance of civil measures, the complexity of the guerrilla warfare process and the impotence of American foreign policy to deal with most of the above. In a very real sense, the United States has not progressed very far in its capability to deal with insurrections which affect our vital interests. In Vietnam, for example, American disregard for the nonmilitary aspects of guerrilla warfare eventually cost us victory.
The United States may, in fact, be doomed to relearn the lessons associated with guerrilla warfare indefinitely unless we can develop a more flexible policy which makes allowances for social and economic solutions as well as military action. Revolutionary ideas have traditionally been defeated only when countermeasures have represented better ideas. NOTES Chapter V: Analyses and Conclusion 1/ Merle Kling, "Cuba: A Case Study of Unconventional Warfare," Military Review, December 1962, p. 12. 2/ See succeeding section entitled "Neutralization of the United States." 3/ These broadcasts/publications did not start until late in the campaign, but are credited with influencing many peasants and rural workers to join Castro in the last few months of the conflict. 4/ Norman A. La Charite, Case Studies in Insurgency and Revolutionary Warfare: Cuba 1953-1959 (Washington, D.C.: SORO, The American University, 1963), p. 112. 5/ Adrian H. Jones and Andrew R. Molnar, Internal Defense against Insurgency: Six Cases (Washington, D.C.: SSRI, The American University, 1966), pp. 65-71. 6/ Enrique Meneses, Fidel Castro (New York: Taplinger Publishing Company, 1966), p. 58. 7/ La Charite, op. cit., pp. 103-104. 8/ As further proof of their effectiveness, urban guerrillas suffered 20 times as many casualties as their rural counterparts. 9/ Priscilla A. Clapp, The Control of Local Conflict: Case Studies, Cuban Insurgency (1952-1959) (Waltham, Massachusetts: Bolt, Beranek and Newman, Inc., 1969), pp. 73-102; and Kling, op. cit., pp. 18-19. 10/ Ramon L. Bonachea and Marta San Martin, The Cuban Insurrection: 1952-1959 (New Brunswick, N.J.: Transaction Books, 1974), pp. 100-105; and Kling, op. cit., pp. 15-16. BIBLIOGRAPHY Books and Special Reports Asprey, Robert B. War in the Shadows: The Guerrilla in History. 2 vols. Garden City: Doubleday & Company, Inc., 1975. Brief synopsis of Castro's insurrection in Volume II. Good overview of the rebellion.
Manages to pack a lot of information in just three chapters. Index, bibliography, footnotes, maps. Batista, Fulgencio. Cuba Betrayed. New York: Vantage Press, 1962. Author's own version of the collapse of his regime. Blames international Communist conspiracy for most of his problems. Very parochial. Frequently conflicts with other published accounts of historical events. Bonachea, Ramon L. and San Martin, Marta. The Cuban Insurrection: 1952-1959. New Brunswick: Transaction Books, 1974. Superior account of the Cuban Revolution. Very well researched and extensively documented. Became one of my principal resource documents. Index, footnotes, bibliography, photos, maps. Brennan, Ray. Castro, Cuba and Justice. New York: Doubleday, 1959. Newspaper correspondent offers eye-witness account of Castro's rise to power from 1953-1959. Excellent account of Batista's counter-insurgency methods. Index. Chester, Edmund A. A Sergeant Named Batista. New York: Henry Holt and Company, 1954. Somewhat exhaustive, though interesting biographical sketch of Fulgencio Batista through 1953. Based upon personal interviews of Batista and his acquaintances. Seems reasonably balanced and accurate. Aligns with other sources. Written before Castro became an issue of substance. Index. Clapp, Priscilla A. The Control of Local Conflict: Case Studies; Volume II (Latin America). Washington: ACDA, 1969. The Cuban Insurgency (1952-1959) is covered in pages 70-136. Includes an excellent appendix on weapons analysis. Footnotes. Dorschner, John and Fabricio, Roberto. The Winds of December. New York: Coward, McCann & Geoghegan, 1980. Superb and highly detailed account of the last weeks of Castro's revolution from 26 November 1958 through 8 January 1959. Bibliography, index, map, photos. Draper, Theodore. Castroism: Theory and Practice. New York: Frederick A. Praeger, Inc., 1965. Tracks the evolution of Castroism and shows how it was applied during the early 1960's.
Correlates Cuban history and customs with Castro's attempts to revolutionize Cuban agriculture and economy. Interesting, but of little value to the overall theme of this paper. Index. Draper, Theodore. Castro's Revolution: Myths and Realities. New York: Praeger, 1962. Good background source. Presents strong evidence that Cuban revolution was a middle-class revolution with little peasant support until the end. Dubois, Jules. Fidel Castro: Rebel, Liberator or Dictator? Indianapolis: Bobbs-Merrill, 1959. Excellent biography covering Castro's life through 1959. Index, photos. Ebon, Martin. Che: The Making of a Legend. New York: Universe Books, 1969. Excellent biography of Che Guevara. Chapters 1-6 were particularly useful for this paper. Appendices, index, bibliography. Estep, Raymond. The Latin American Nations Today. Maxwell AFB: Air University, 1964. Covers major Latin American developments which occurred between 1950 and 1964. Pages 85-112 address Cuba. Good section on Cuba's political party alignments during the 1950's. Index, glossary, suggested readings. Fagg, John Edwin. Cuba, Haiti & The Dominican Republic. Englewood Cliffs: Prentice-Hall, Inc., 1965. Good historical sketch on Cuba, pp. 9-111. Maps, bibliography, index. Ferguson, J. Halcro. The Revolution of Latin America. London: Thames and Hudson, 1963. Covers history of Latin American revolutions from early 1920's - 1962 with a special section devoted to the Cuban Revolution. Devoted more to the revolutionary phenomenon itself rather than details. Good discussion of "Fidelismo" and its repercussions on the rest of Latin America. Ferguson is a British author and broadcaster on Latin American affairs. Book had limited value for this paper. Foreign Area Studies Division. Special Warfare Area Handbook for Cuba. Washington: SORO, 1961. Presents social, economic, military background information intended for use in planning for psychological and unconventional warfare. Bibliography, maps, charts.
Franqui, Carlos, ed. Diary of the Cuban Revolution. New York: Viking Press, 1980. As the title suggests, the book contains numerous letters and diary excerpts from the actual participants of the Cuban Revolution including: Fidel Castro, Che Guevara and the book's editor/author who was the editor of an underground newspaper during the conflict. Arranged chronologically. Contains a brief biographical section on many of the lesser-known revolutionaries. Valuable source. Index. Guevara, Ernesto. Episodes of the Revolutionary War. New York: International Publishers, 1968. A collection of Guevara's articles describing the revolution. Includes descriptions of several battles. Guevara, Ernesto. Reminiscences of the Cuban Revolutionary War. New York: Grove Press, 1968. Translated by Victoria Ortiz. Compilation of 32 articles by Guevara. Also includes 26 letters. First hand account of several battles and the problems the rebels faced. Harris, Richard. Death of a Revolutionary: Che Guevara's Last Mission. New York: W.W. Norton & Company, Inc., 1976. Exhaustive account of Guevara's last years and days. Good biographical section. Index, map. Huberman, Leo and Sweezy, Paul M., eds. Regis Debray and the Latin American Revolution. New York: Monthly Review Press, 1968. Collection of essays written by or about Regis Debray. Wide variation of theories about Latin American revolutions, especially Cuba. Jones, Adrian H., and Molnar, Andrew R. Internal Defense Against Insurgency: Six Cases. Washington, D.C.: SSRI, The American University, 1966. Briefly sketches six post World War II insurgencies which occurred between 1948 and 1965. Pages 59-72 address Cuba. Good overview. Maps, footnotes, charts. Juvenal, Michael P. United States Foreign Policy Towards Cuba in This Decade. Carlisle Barracks: U.S. Army War College, 1971. Contains some information on United States-Cuban relations during the revolution, but concentrates mainly upon the 1960's. Footnotes, bibliography.
La Charite, Norman. Case Studies in Insurgency and Revolutionary Warfare: Cuba 1953-1959. Washington: SORO, 1963. Somewhat redundant analysis of the Cuban Revolution. Heavy emphasis on socioeconomic factors. Index, bibliography, footnotes, map. MacGaffey, Wyatt and Barnett, Clifford R. Cuba: Its People, Its Society, Its Culture. New Haven: HRAF Press, 1962. As the title suggests, a study of Cuba prior to 1960 with heavy emphasis on social and cultural conditions. Good demographic source. Index. Matthews, Herbert L. Fidel Castro. New York: Simon and Schuster, 1969. Newspaperman's account of Castro's rise to power. Based upon many personal interviews. Seems strongly biased in favor of Castro. Includes some biographical data on Castro's early life. Index. Matthews, Herbert L. The Cuban Story. New York: George Brazillier, 1961. Largely a self-aggrandizing account of the effects of the author's famous interview with Castro in 1957. Contains some valuable insights into the early stages of the insurrection. Index. McRae, Michael S. The Cuban Issue Reevaluated. Maxwell AFB: Air University, 1974. Investigates the rise of the Castro regime and its relationships with the United States, Soviet Union and the Organization of American States. Excellent discussion of Castro's evolution to communism. Footnotes, bibliography. Meneses, Enrique. Fidel Castro. New York: Taplinger Publishing Company, 1966. A Spanish reporter for the Paris-Match writes of his experiences with Castro and the Cuban Revolution. Chapters 1-8 deal specifically with the period covered by this paper and were very useful for their insights into Castro and his organization. Particularly significant because it helped to give the European view of the conflict. Index, maps, photos. Miller, William R. The Dynamics of U.S.-Cuban Relations and Their Eventuality. Maxwell AFB: Air University, 1976. Traces United States-Cuban relations through the mid-1970's.
Good background on United States role during the revolution. Footnotes, bibliography. Mydans, Carl and Mydans, Shelley. The Violent Peace. Kingsport: Kingsport Press, Inc., 1968. An excellent treatment of selected wars since 1945. Pages 248-267 deal with the Cuban Revolution. Most of the material is drawn from quotations by Sam Halper, one of the several correspondents who followed Castro around the Sierra Maestra mountains, trying to get a story. Index, map, excellent photos. Nelson, Lowry. Rural Cuba. Minneapolis: The University of Minnesota Press, 1950. Excellent source of information about Cuba's socioeconomic status prior to Castro's insurrection. Index. Perez, Louis A. Jr. Army Politics in Cuba, 1898-1958. Pittsburgh: University of Pittsburgh Press, 1976. Traces the creation of the Cuban Army from its inception until Castro's takeover. Insightful historical analysis of role the army played during various political revolutions during that period and degree to which it came to dominate Cuban politics and government. Rich in names and details. Pflaum, Irving Peter. Tragic Island: How Communism Came to Cuba. Englewood Cliffs: Prentice-Hall, Inc., 1961. Newspaperman's account of Castro's rise to power. The author traveled extensively in Cuba in late 1958 and through much of 1960. Excellent account of role U.S. played in Batista's ouster and Castro's conversion to communism. Phillips, R. Hart. Cuba: Island of Paradox. New York: McDowell, Obolensky, n.d. Newspaper correspondent gives her impressions of events in Cuba, primarily between 1931 and 1960. Excellent "on-scene" accounts of many events. Based largely upon interviews and hearsay. Rambling style, but quite readable. Facts, especially concerning dates and specific events, are often wrong or obscure. Author seems biased in favor of Castro. Not a good source from a research standpoint except that it gives one a feel for the events from an American's viewpoint. Some good quotations. Index.
Smith, Earl E.T. The Fourth Floor: An Account of the Castro Communist Revolution. New York: Random House, 1962. Former U.S. Ambassador to Cuba from 1957-59, believes that U.S. policy was at least partially responsible for Castro's victory. Discusses extensively his efforts to stop the insurrection. Index. Smith, Robert F. The United States and Cuba: Business and Diplomacy, 1917-1960. New York: Bookmen Association, 1960. Good background on U.S. business and diplomatic involvement in Cuba from the Spanish-American War until Castro's takeover. Index. Strode, Hudson. The Pageant of Cuba. New York: Harrison Smith and Robert Haas, 1934. Detailed history of Cuba through Batista's initial rise to power in 1933. Old photos, index, bibliography, map. Suchlicki, Jaime. Cuba: From Columbus to Castro. New York: Charles Scribner's Sons, 1974. Concise pre-Castro history, but sketchy once Castro is introduced. Index, bibliography, photos. Taber, Robert. M-26: Biography of a Revolution. New York: Lyle Stuart, 1961. Excellent journalist account of the revolution. Urrutia Lleo, Manuel. Fidel Castro & Company, Inc. New York: Praeger, 1964. A former President of Cuba 1959-60, and Castro's choice to lead the government following the revolution gives an account of his own attempts to establish a government following Batista's departure. Also describes Castro's coup d'etat which deposed Urrutia. Index. U.S. Army Command and General Staff College. Selected Readings on Internal Defense: Cuba 1953-59. Fort Leavenworth, Kansas: USACAGSC, 1970. Excerpts from selected books and articles covering the Cuban Revolution. U.S. Department of Commerce. Investment in Cuba: Basic Information for United States Businessmen. Washington: GPO, 1956. Contains a wide variety of facts and figures concerning Cuban commerce in the early 1950's. Periodicals Aaron, Harold R. "Why Batista Lost." Army Magazine, September 1965, pp. 64-71. Succinct account of the Cuban Revolution.
The author hypothesizes that Castro won because he met no meaningful opposition. Chapelle, Dickey. "How Castro Won." Marine Corps Gazette, February 1960. Excellent first-hand account of Castro's infrastructure and tactics. The author spent several months in the field, interviewing Castro and his men. Guevara, Ernesto. "La Guerra de Guerrillas." Army Magazine, March, April and May 1961. Guevara's ideas about guerrilla warfare translated and condensed by Army Magazine. Written in hindsight after Castro had succeeded. Very specific, right down to weapons, tactics, hygiene, role of women, logistics, etc. Kling, Merle. "Cuba: A Case Study of Unconventional Warfare." Military Review, December 1962, pp. 11-22. Brief overview, excellent handling of Castro's strategy. Macaulay, Neill W. Jr. "Highway Ambush." Army Magazine, August 1964, pp. 50-56. Detailed account of a guerrilla attack in Pinar del Rio province during the latter phases of the revolution. St. George, Andrew. "A Visit With a Revolutionary." Coronet, Vol. 43, no. 4 (whole no. 256, February 1958), pp. 74-80. Journalist's view of Castro based upon personal interviews. Heavily interspersed with Castro's quotations.
The Evidence that HIV Causes AIDS: An NIAID Fact Sheet
The Evidence That HIV Causes AIDS
The acquired immunodeficiency syndrome (AIDS) was first recognized in 1981 and has since become a major worldwide pandemic. AIDS is caused by the human immunodeficiency virus (HIV). By leading to the destruction and/or functional impairment of cells of the immune system, notably CD4+ T cells, HIV progressively destroys the body's ability to fight infections and certain cancers.
An HIV-infected person is diagnosed with AIDS when his or her immune system is seriously compromised and manifestations of HIV infection are severe. The U.S. Centers for Disease Control and Prevention (CDC) currently defines AIDS in an adult or adolescent age 13 years or older as the presence of one of 26 conditions indicative of severe immunosuppression associated with HIV infection, such as Pneumocystis carinii pneumonia (PCP), a condition extraordinarily rare in people without HIV infection. Most other AIDS-defining conditions are also "opportunistic infections" which rarely cause harm in healthy individuals. A diagnosis of AIDS also is given to HIV-infected individuals when their CD4+ T-cell count falls below 200 cells/cubic millimeter (mm3) of blood. Healthy adults usually have CD4+ T-cell counts of 600-1,500/mm3 of blood. In HIV-infected children younger than 13 years, the CDC definition of AIDS is similar to that in adolescents and adults, except for the addition of certain infections commonly seen in pediatric patients with HIV. (CDC. MMWR 1992;41(RR-17):1; CDC. MMWR 1994;43(RR-12):1).
In many developing countries, where diagnostic facilities may be minimal, healthcare workers use a World Health Organization (WHO) AIDS case definition based on the presence of clinical signs associated with immune deficiency and the exclusion of other known causes of immunosuppression, such as cancer or malnutrition. An expanded WHO AIDS case definition, with a broader spectrum of clinical manifestations of HIV infection, is employed in settings where HIV antibody tests are available (WHO. Wkly Epidemiol Rec. 1994;69:273).
As of the end of 2000, an estimated 36.1 million people worldwide - 34.7 million adults and 1.4 million children younger than 15 years - were living with HIV/AIDS. Through 2000, cumulative HIV/AIDS-associated deaths worldwide numbered approximately 21.8 million - 17.5 million adults and 4.3 million children younger than 15 years. In the United States, an estimated 800,000 to 900,000 people are living with HIV infection. As of December 31, 1999, 733,374 cases of AIDS and 430,441 AIDS-related deaths had been reported to the CDC. AIDS is the fifth leading cause of death among all adults aged 25 to 44 in the United States. Among African-Americans in the 25 to 44 age group, AIDS is the leading cause of death for men and the second leading cause of death for women (UNAIDS. AIDS epidemic update: December 2000; CDC. HIV/AIDS Surveillance Report 1999;11:1; CDC. MMWR 1999;48[RR13]:1).
This document summarizes the abundant evidence that HIV causes AIDS. Questions and answers at the end of this document address the specific claims of those who assert that HIV is not the cause of AIDS.
EVIDENCE THAT HIV CAUSES AIDS
HIV fulfills Koch's postulates as the cause of AIDS.
Among many criteria used over the years to prove the link between putative pathogenic (disease-causing) agents and disease, perhaps the most-cited are Koch's postulates, developed in the late 19th century. Koch's postulates have been variously interpreted by many scientists, and modifications have been suggested to accommodate new technologies, particularly with regard to viruses (Harden. Pubbl Stn Zool Napoli [II] 1992;14:249; O'Brien, Goedert. Curr Opin Immunol 1996;8:613). However, the basic tenets remain the same, and for more than a century Koch's postulates, as listed below, have served as the litmus test for determining the cause of any epidemic disease:
- Epidemiological association: the suspected cause must be strongly associated with the disease.
- Isolation: the suspected pathogen can be isolated - and propagated - outside the host.
- Transmission pathogenesis: transfer of the suspected pathogen to an uninfected host, man or animal, produces the disease in that host.
With regard to postulate #1, numerous studies from around the world show that virtually all AIDS patients are HIV-seropositive; that is, they carry antibodies that indicate HIV infection. With regard to postulate #2, modern culture techniques have allowed the isolation of HIV in virtually all AIDS patients, as well as in almost all HIV-seropositive individuals with both early- and late-stage disease. In addition, the polymerase chain reaction (PCR) and other sophisticated molecular techniques have enabled researchers to document the presence of HIV genes in virtually all patients with AIDS, as well as in individuals in earlier stages of HIV disease.
Postulate #3 has been fulfilled in tragic incidents involving three laboratory workers with no other risk factors who have developed AIDS or severe immunosuppression after accidental exposure to concentrated, cloned HIV in the laboratory. In all three cases, HIV was isolated from the infected individual, sequenced and shown to be the infecting strain of virus. In another tragic incident, transmission of HIV from a Florida dentist to six patients has been documented by genetic analyses of virus isolated from both the dentist and the patients. The dentist and three of the patients developed AIDS and died, and at least one of the other patients has developed AIDS. Five of the patients had no HIV risk factors other than multiple visits to the dentist for invasive procedures (O'Brien, Goedert. Curr Opin Immunol 1996;8:613; O'Brien, 1997; Ciesielski et al. Ann Intern Med 1994;121:886).
In addition, through December 1999, the CDC had received reports of 56 health care workers in the United States with documented, occupationally acquired HIV infection, of whom 25 have developed AIDS in the absence of other risk factors. The development of AIDS following known HIV seroconversion also has been repeatedly observed in pediatric and adult blood transfusion cases, in mother-to-child transmission, and in studies of hemophilia, injection-drug use and sexual transmission in which seroconversion can be documented using serial blood samples (CDC. HIV AIDS Surveillance Report 1999;11:1; AIDS Knowledge Base, 1999). For example, in a 10-year study in the Netherlands, researchers followed 11 children who had become infected with HIV as neonates by small aliquots of plasma from a single HIV-infected donor. During the 10-year period, eight of the children died of AIDS. Of the remaining three children, all showed a progressive decline in cellular immunity, and two of the three had symptoms probably related to HIV infection (van den Berg et al. Acta Paediatr 1994;83:17).
Koch's postulates also have been fulfilled in animal models of human AIDS. Chimpanzees experimentally infected with HIV have developed severe immunosuppression and AIDS. In severe combined immunodeficiency (SCID) mice given a human immune system, HIV produces similar patterns of cell killing and pathogenesis as seen in people. HIV-2, a less virulent variant of HIV which causes AIDS in people, also causes an AIDS-like syndrome in baboons. More than a dozen strains of simian immunodeficiency virus (SIV), a close cousin of HIV, cause AIDS in Asian macaques. In addition, chimeric viruses known as SHIVs, which contain an SIV backbone with various HIV genes in place of the corresponding SIV genes, cause AIDS in macaques. Further strengthening the association of these viruses with AIDS, researchers have shown that SIV/SHIVs isolated from animals with AIDS cause AIDS when transmitted to uninfected animals (O'Neil et al. J Infect Dis 2000;182:1051; Aldrovandi et al. Nature 1993;363:732; Liska et al. AIDS Res Hum Retroviruses 1999;15:445; Locher et al. Arch Pathol Lab Med 1998;22:523; Hirsch et al. Virus Res 1994;32:183; Joag et al. J Virol 1996;70:3189).
AIDS and HIV infection are invariably linked in time, place and population group.
Historically, the occurrence of AIDS in human populations around the world has closely followed the appearance of HIV. In the United States, the first cases of AIDS were reported in 1981 among homosexual men in New York and California, and retrospective examination of frozen blood samples from a U.S. cohort of gay men showed the presence of HIV antibodies as early as 1978, but not before then. Subsequently, in every region, country and city where AIDS has appeared, evidence of HIV infection has preceded AIDS by just a few years (CDC. MMWR 1981;30:250; CDC. MMWR 1981;30:305; Jaffe et al. Ann Intern Med 1985;103:210; U.S. Census Bureau; UNAIDS).
Many studies agree that only a single factor, HIV, predicts whether a person will develop AIDS.
Other viral infections, bacterial infections, sexual behavior patterns and drug abuse patterns do not predict who develops AIDS. Individuals from diverse backgrounds, including heterosexual men and women, homosexual men and women, hemophiliacs, sexual partners of hemophiliacs and transfusion recipients, injection-drug users and infants have all developed AIDS, with the only common denominator being their infection with HIV (NIAID, 1995).
In cohort studies, severe immunosuppression and AIDS-defining illnesses occur almost exclusively in individuals who are HIV-infected.
For example, analysis of data from more than 8,000 participants in the Multicenter AIDS Cohort Study (MACS) and the Women's Interagency HIV Study (WIHS) demonstrated that participants who were HIV-seropositive were 1,100 times more likely to develop an AIDS-associated illness than those who were HIV-seronegative. These overwhelming odds provide a clarity of association that is unusual in medical research (MACS and WIHS Principal Investigators, 2000).
In a Canadian cohort, investigators followed 715 homosexual men for a median of 8.6 years. Every case of AIDS in this cohort occurred in individuals who were HIV-seropositive. No AIDS-defining illnesses occurred in men who remained negative for HIV antibodies, despite the fact that these individuals had appreciable patterns of illicit drug use and receptive anal intercourse (Schechter et al. Lancet 1993;341:658).
Before the appearance of HIV, AIDS-related diseases such as PCP, KS and MAC were rare in developed countries; today, they are common in HIV-infected individuals.
Prior to the appearance of HIV, AIDS-related conditions such as Pneumocystis carinii pneumonia (PCP), Kaposi's sarcoma (KS) and disseminated infection with the Mycobacterium avium complex (MAC) were extraordinarily rare in the United States. In a 1967 survey, only 107 cases of PCP in the United States had been described in the medical literature, virtually all among individuals with underlying immunosuppressive conditions. Before the AIDS epidemic, the annual incidence of Kaposi's sarcoma in the United States was only 0.2 to 0.6 cases per million population, and only 32 individuals with disseminated MAC disease had been described in the medical literature (Safai. Ann NY Acad Sci 1984;437:373; Le Clair. Am Rev Respir Dis 1969;99:542; Masur. JAMA 1982;248:3013).
By the end of 1999, CDC had received reports of 166,368 HIV-infected patients in the United States with definitive diagnoses of PCP, 46,684 with definitive diagnoses of KS, and 41,873 with definitive diagnoses of disseminated MAC (personal communication).
In developing countries, patterns of both rare and endemic diseases have changed dramatically as HIV has spread, with a far greater toll now being exacted among the young and middle-aged, including well-educated members of the middle class.
In developing countries, the emergence of the HIV epidemic has dramatically changed patterns of disease in affected communities. As in developed countries, previously rare, "opportunistic" diseases such as PCP and certain forms of meningitis have become more commonplace. In addition, as HIV seroprevalence rates have risen, there have been significant increases in the burden of endemic conditions such as tuberculosis (TB), particularly among young people. For example, as HIV seroprevalence increased sharply in Blantyre, Malawi from 1986 to 1995, tuberculosis admissions at the city's main hospital rose more than 400 percent, with the largest increase in cases among children and young adults. In the rural Hlabisa District of South Africa, admissions to tuberculosis wards increased 360 percent from 1992 to 1998, concomitant with a steep rise in HIV seroprevalence. High rates of mortality due to endemic conditions such as TB, diarrheal diseases and wasting syndromes, formerly confined to the elderly and malnourished, are now common among HIV-infected young and middle-aged people in many developing countries (UNAIDS, 2000; Harries et al. Int J Tuberc Lung Dis 1997;1:346; Floyd et al. JAMA 1999;282:1087).
In studies conducted in both developing and developed countries, death rates are markedly higher among HIV-seropositive individuals than among HIV-seronegative individuals.
For example, Nunn and colleagues (BMJ 1997;315:767) assessed the impact of HIV infection over five years in a rural population in the Masaka District of Uganda. Among 8,833 individuals of all ages who had an unambiguous result on testing for HIV-antibodies (either 2 or 3 different test kits were used for blood samples from each individual), HIV-seropositive people were 16 times more likely to die over five years than HIV-seronegative people (see table). Among individuals ages 25 to 34, HIV-seropositive people were 27 times more likely to die than HIV-seronegative people.
In another study in Uganda, 19,983 adults in the rural Rakai District were followed for 10 to 30 months (Sewankambo et al. AIDS 2000;14:2391). In this cohort, HIV-seropositive people were 20 times more likely to die than HIV-seronegative people during 31,432 person-years of observation.
Similar findings have emerged from other studies (Boerma et al. AIDS 1998;12(suppl 1):S3); for example,
- in Tanzania, HIV-seropositive people were 12.9 times more likely to die over two years than HIV-seronegative people (Borgdorff et al. Genitourin Med 1995;71:212)
- in Malawi, mortality over three years among children who survived the first year of life was 9.5 times higher among HIV-seropositive children than among HIV-seronegative children (Taha et al. Pediatr Infect Dis J 1999;18:689)
- in Rwanda, mortality was 21 times higher for HIV-seropositive children than for HIV-seronegative children after five years (Spira et al. Pediatrics 1999;14:e56). Among the mothers of these children, mortality was 9 times higher among HIV-seropositive women than among HIV-seronegative women in four years of follow-up (Leroy et al. J Acquir Immune Defic Syndr Hum Retrovirol 1995;9:415).
- in Cote d'Ivoire, HIV-seropositive individuals with pulmonary tuberculosis (TB) were 17 times more likely to die within six months than HIV-seronegative individuals with pulmonary TB (Ackah et al. Lancet 1995; 345:607).
- in the former Zaire (now the Democratic Republic of Congo), HIV-infected infants were 11 times more likely to die from diarrhea than uninfected infants (Thea et al. NEJM 1993;329:1696).
- in South Africa, the death rate for children hospitalized with severe lower respiratory tract infections was 6.5 times higher for HIV-infected infants than for uninfected children (Madhi et al. Clin Infect Dis 2000;31:170).
Kilmarx and colleagues (Lancet 2000; 356:770) recently reported data on HIV infection and mortality in a cohort of female commercial sex workers in Chiang Rai, Thailand. Among 500 women enrolled in the study between 1991 and 1994, the mortality rate through October 1998 among women who were HIV-infected at enrollment (59 deaths among 160 HIV-infected women) was 52.7 times higher than among women who remained uninfected with HIV (2 deaths among 306 uninfected women). The mortality rate among women who became infected during the study (7 deaths among 34 seroconverting women) was 22.5 times higher than among persistently uninfected women. Among the HIV-infected women, only 3 of whom received antiretroviral medications, all reported causes of death were associated with immunosuppression, whereas the reported causes of death of the two uninfected women were postpartum amniotic embolism and gunshot wound.
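The magnitude of such comparisons can be checked from the counts reported above. A minimal sketch follows; note that the published figures (52.7-fold, 22.5-fold) were calculated per person-time of follow-up, so a crude count-based ratio, as computed here, only approximates them.

```python
# Crude risk ratio from the Chiang Rai cohort counts reported above.
# The published mortality-rate ratios were person-time based; this
# simplified count-based calculation is an approximation, not a
# reproduction of the study's method.

def risk_ratio(deaths_exposed, n_exposed, deaths_unexposed, n_unexposed):
    """Ratio of cumulative mortality between an exposed and an unexposed group."""
    risk_exposed = deaths_exposed / n_exposed
    risk_unexposed = deaths_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# HIV-infected at enrollment: 59 deaths among 160 women.
# Persistently uninfected: 2 deaths among 306 women.
rr = risk_ratio(59, 160, 2, 306)
print(f"crude risk ratio: {rr:.1f}")  # ~56.4, of the same order as the published 52.7
```

The small gap between the crude ratio and the published one reflects differing follow-up durations between the groups, which person-time analysis accounts for.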
Excess mortality among HIV-seropositive people also has been repeatedly observed in studies in developed countries, perhaps most dramatically among hemophiliacs. For example, Darby et al. (Nature 1995;377:79) studied 6,278 hemophiliacs living in the United Kingdom during the period 1977-91. Among 2,448 individuals with severe hemophilia, the annual death rate was stable at 8 per 1,000 during 1977-84. While death rates remained stable at 8 per 1,000 from 1985-1992 among HIV-seronegative persons with severe hemophilia, deaths rose steeply among those who had become HIV-seropositive following HIV-tainted transfusions during 1979-1986, reaching 81 per 1,000 in 1991-92. Among 3,830 individuals with mild or moderate hemophilia, the pattern was similar, with an initial death rate of 4 per 1,000 in 1977-84 that remained stable among HIV-seronegative individuals but rose to 85 per 1,000 in 1991-92 among seropositive individuals.
Similar data have emerged from the Multicenter Hemophilia Cohort Study. Among 1,028 hemophiliacs followed for a median of 10.3 years, HIV-infected individuals (n=321) were 11 times more likely to die than HIV-negative subjects (n=707), with the dose of Factor VIII having no effect on survival in either group (Goedert. Lancet 1995;346:1425).
In the Multicenter AIDS Cohort Study (MACS), a 16-year study of 5,622 homosexual and bisexual men, 1,668 of 2,761 HIV-seropositive men have died (60 percent), 1,547 after a diagnosis of AIDS. In contrast, among 2,861 HIV-seronegative participants, only 66 men (2.3 percent) have died (A. Munoz, MACS, personal communication).
HIV can be detected in virtually everyone with AIDS.
Recently developed sensitive testing methods, including the polymerase chain reaction (PCR) and improved culture techniques, have enabled researchers to find HIV in patients with AIDS with few exceptions. HIV has been repeatedly isolated from the blood, semen and vaginal secretions of patients with AIDS, findings consistent with the epidemiologic data demonstrating AIDS transmission via sexual activity and contact with infected blood (Hammer et al. J Clin Microbiol 1993;31:2557; Jackson et al. J Clin Microbiol 1990;28:16).
Numerous studies of HIV-infected people have shown that high levels of infectious HIV, viral antigens, and HIV nucleic acids (DNA and RNA) in the body predict immune system deterioration and an increased risk for developing AIDS. Conversely, patients with low levels of virus have a much lower risk of developing AIDS.
For example, in an analysis of 1,604 HIV-infected men in the Multicenter AIDS Cohort Study (MACS), the risk of a patient developing AIDS within six years was strongly associated with levels of HIV RNA in the plasma as measured by a sensitive test known as the branched-DNA signal-amplification assay (bDNA):
(Table: proportion of patients developing AIDS within six years, by plasma RNA concentration in copies/mL of blood. Strata reported: 501 - 3,000; 3,001 - 10,000; 10,001 - 30,000. The corresponding proportions were not preserved in this copy; risk rose with increasing viral load.)
(Source: Mellors et al. Ann Intern Med 1997;126:946)
Similar associations between increasing HIV RNA levels and a greater risk of disease progression have been observed in HIV-infected children in both developed and developing countries (Palumbo et al. JAMA 1998;279:756; Taha et al. AIDS 2000;14:453).
In the very small proportion of untreated HIV-infected individuals whose disease progresses very slowly, the amount of HIV in the blood and lymph nodes is significantly lower than in HIV-infected people whose disease progression is more typical (Pantaleo et al. NEJM 1995;332:209; Cao et al. NEJM 1995;332:201; Barker et al. Blood 1998;92:3105).
The availability of potent combinations of drugs that specifically block HIV replication has dramatically improved the prognosis for HIV-infected individuals. Such an effect would not be seen if HIV did not have a central role in causing AIDS.
Clinical trials have shown that potent three-drug combinations of anti-HIV drugs, known as highly active antiretroviral therapy (HAART), can significantly reduce the incidence of AIDS and death among HIV-infected individuals as compared to previously available HIV treatment regimens (Hammer et al. NEJM 1997;337:725; Cameron et al. Lancet 1998;351:543).
Use of these potent anti-HIV combination therapies has contributed to dramatic reductions in the incidence of AIDS and AIDS-related deaths in populations where these drugs are widely available, among both adults and children (Figure 1; CDC. HIV AIDS Surveillance Report 1999;11:1; Palella et al. NEJM 1998;338:853; Mocroft et al. Lancet 1998;352:1725; Mocroft et al. Lancet 2000;356:291; Vittinghoff et al. J Infect Dis 1999;179:717; Detels et al. JAMA 1998;280:1497; de Martino et al. JAMA 2000;284:190; CASCADE Collaboration. Lancet 2000;355:1158; Hogg et al. CMAJ 1999;160:659; Schwarcz et al. Am J Epidemiol 2000;152:178; Kaplan et al. Clin Infect Dis 2000;30:S5; McNaghten et al. AIDS 1999;13:1687).
For example, in a prospective study of more than 7,300 HIV-infected patients in 52 European outpatient clinics, the incidence of new AIDS-defining illnesses declined from 30.7 per 100 patient-years of observation in 1994 (before the availability of HAART) to 2.5 per 100 patient years in 1998, when the majority of patients received HAART (Mocroft et al. Lancet 2000;356:291).
Among HIV-infected patients who receive anti-HIV therapy, those whose viral loads are driven to low levels are much less likely to develop AIDS or die than patients who do not respond to therapy. Such an effect would not be seen if HIV did not have a central role in causing AIDS.
Clinical trials in both HIV-infected children and adults have demonstrated a link between a good virologic response to therapy (i.e. much less virus in the body) and a reduced risk of developing AIDS or dying (Montaner et al. AIDS 1998;12:F23; Palumbo et al. JAMA 1998;279:756; O'Brien et al. NEJM 1996;334:426; Katzenstein et al. NEJM 1996;335:1091; Marschner et al. J Infect Dis 1998;177:40; Hammer et al. NEJM 1997;337:725; Cameron et al. Lancet 1998;351:543).
This effect has also been seen in routine clinical practice. For example, in an analysis of 2,674 HIV-infected patients who started highly active antiretroviral therapy (HAART) in 1995-1998, 6.6 percent of patients who achieved and maintained undetectable viral loads (<400 copies/mL of blood) developed AIDS or died within 30 months, compared with 20.1 percent of patients who never achieved undetectable concentrations (Ledergerber et al. Lancet 1999;353:863).
A survey of 230,179 AIDS patients in the United States revealed only 299 HIV-seronegative individuals. An evaluation of 172 of these 299 patients found 131 actually to be seropositive; an additional 34 died before their serostatus could be confirmed (Smith et al. N Engl J Med 1993;328:373).
Numerous serosurveys show that AIDS is common in populations where many individuals have HIV antibodies. Conversely, in populations with low seroprevalence of HIV antibodies, AIDS is extremely rare.
For example, in the southern African country of Zimbabwe (population 11.4 million), more than 25 percent of adults ages 15 to 49 are estimated to be HIV antibody-positive, based on numerous studies. As of November 1999, more than 74,000 cases of AIDS in Zimbabwe had been reported to the World Health Organization (WHO). In contrast, Madagascar, an island country off the southeast coast of Africa (population 15.1 million) with a very low HIV seroprevalence rate, reported only 37 cases of AIDS to WHO through November 1999. Yet, other sexually transmitted diseases, notably syphilis, are common in Madagascar, suggesting that conditions are ripe for the spread of HIV and AIDS if the virus becomes entrenched in that country (U.S. Census Bureau; UNAIDS, 2000; WHO. Wkly Epidemiol Rec 1999;74:1; Behets et al. Lancet 1996;347:831).
The specific immunologic profile that typifies AIDS - a persistently low CD4+ T-cell count - is extraordinarily rare in the absence of HIV infection or other known cause of immunosuppression.
For example, in the NIAID-supported Multicenter AIDS Cohort Study (MACS), 22,643 CD4+ T-cell determinations in 2,713 HIV-seronegative homosexual and bisexual men revealed only one individual with a CD4+ T-cell count persistently lower than 300 cells/mm3 of blood, and this individual was receiving immunosuppressive therapy. Similar results have been reported from other studies (Vermund et al. NEJM 1993;328:442; NIAID, 1995).
Newborn infants have no behavioral risk factors for AIDS, yet many children born to HIV-infected mothers have developed AIDS and died.
Only newborns who become HIV-infected before or during birth, during breastfeeding, or (rarely) following exposure to HIV-tainted blood or blood products after birth, go on to develop the profound immunosuppression that leads to AIDS. Babies who are not HIV-infected do not develop AIDS. In the United States, 8,718 cases of AIDS among children younger than age 13 had been reported to the CDC as of December 31, 1999. Cumulative U.S. AIDS deaths among individuals younger than age 15 numbered 5,044 through December 31, 1999. Globally, UNAIDS estimates that 480,000 child deaths due to AIDS occurred in 1999 alone (CDC. HIV/AIDS Surveillance Report 1999;11:1; UNAIDS. AIDS epidemic update: June 2000).
Because many HIV-infected mothers abuse recreational drugs, some have argued that maternal drug use itself causes pediatric AIDS. However, studies have consistently shown that babies who are not HIV-infected do not develop AIDS, regardless of their mothers' drug use (European Collaborative Study. Lancet 1991;337:253; European Collaborative Study. Pediatr Infect Dis J 1997;16:1151; Abrams et al. Pediatrics 1995;96:451).
For example, a majority of the HIV-infected, pregnant women enrolled in the European Collaborative Study are current or former injection drug users. In this ongoing study, mothers and their babies are followed from birth in 10 centers in Europe. In a paper in Lancet, study investigators reported that none of 343 HIV-seronegative children born to HIV-seropositive mothers had developed AIDS or persistent immune deficiency. In contrast, among 64 seropositive children, 30 percent presented with AIDS within 6 months of age or with oral candidiasis followed rapidly by the onset of AIDS. By their first birthday, 17 percent died of HIV-related diseases (European Collaborative Study. Lancet 1991;337:253).
In a study in New York, investigators followed 84 HIV-infected and 248 HIV-uninfected infants, all born to HIV-seropositive mothers. The mothers of the two groups of infants were equally likely to be injection drug users (47 percent vs. 50 percent), and had similar rates of alcohol, tobacco, cocaine, heroin and methadone use. Of the 84 HIV-infected children, 22 died during a median follow-up period of 27.6 months, including 20 infants who died before their second birthday. Twenty-one of these deaths were classified as AIDS-related. Among the 248 uninfected children, only one death (due to child abuse) was reported during a median follow-up period of 26.1 months (Abrams et al. Pediatrics 1995;96:451).
The HIV-infected twin develops AIDS while the uninfected twin does not.
Because twins share an in utero environment and genetic relationships, similarities and differences between them can provide important insight into infectious diseases, including AIDS (Goedert. Acta Paediatr Supp 1997;421:56). Researchers have documented cases of HIV-infected mothers who have given birth to twins, one of whom is HIV-infected and the other not. The HIV-infected children developed AIDS, while the other children remained clinically and immunologically normal (Park et al. J Clin Microbiol 1987;25:1119; Menez-Bautista et al. Am J Dis Child 1986;140:678; Thomas et al. Pediatrics 1990;86:774; Young et al. Pediatr Infect Dis J 1990;9:454; Barlow and Mok. Arch Dis Child 1993;68:507; Guerrero Vazquez et al. An Esp Pediatr 1993;39:445).
Studies of transfusion-acquired AIDS cases have repeatedly led to the discovery of HIV in the patient as well as in the blood donor.
Numerous studies have shown an almost perfect correlation between the occurrence of AIDS in a blood recipient and donor, and evidence of homologous HIV strains in both the recipient and the donor (NIAID, 1995).
HIV is similar in genetic structure and morphology to other lentiviruses that often cause immunodeficiency in their animal hosts in addition to slow, progressive wasting disorders, neurodegeneration and death.
Like HIV in humans, animal viruses such as feline immunodeficiency virus (FIV) in cats, visna virus in sheep and simian immunodeficiency virus (SIV) in monkeys primarily infect cells of the immune system such as T cells and macrophages. For example, visna virus infects macrophages and causes a slowly progressive neurologic disease (Haase. Nature 1986;322:130).
HIV causes the death and dysfunction of CD4+ T lymphocytes in vitro and in vivo.
CD4+ T cell dysfunction and depletion are hallmarks of HIV disease. The recognition that HIV infects and destroys CD4+ T cells in vitro strongly suggests a direct link between HIV infection, CD4+ T cell depletion, and development of AIDS. A variety of mechanisms, both directly and indirectly related to HIV infection of CD4+ T cells, are likely responsible for the defects in CD4+ T cell function observed in HIV-infected people. Not only can HIV enter and kill CD4+ T cells directly, but several HIV gene products may interfere with the function of uninfected cells (NIAID, 1995; Pantaleo et al. NEJM 1993;328:327).
ANSWERING THE SKEPTICS:
RESPONSES TO ARGUMENTS THAT HIV DOES NOT CAUSE AIDS
MYTH: HIV antibody testing is unreliable.
FACT: Diagnosis of infection using antibody testing is one of the best-established concepts in medicine. HIV antibody tests exceed the performance of most other infectious disease tests in both sensitivity (the ability of the screening test to give a positive finding when the person tested truly has the disease) and specificity (the ability of the test to give a negative finding when the subjects tested are free of the disease under study). Current HIV antibody tests have sensitivity and specificity in excess of 98% and are therefore extremely reliable (WHO, 1998; Sloand et al. JAMA 1991;266:2861).
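The sensitivity and specificity definitions above can be made concrete with a small sketch. The counts below are purely hypothetical illustrations, not data from any actual HIV test evaluation; they are chosen only so that both measures exceed the 98% threshold cited in the text.

```python
# Illustration of the sensitivity/specificity definitions given above,
# using hypothetical counts (not data from any real test evaluation).

def sensitivity(true_pos, false_neg):
    """Probability the test is positive given the person truly has the disease."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Probability the test is negative given the person is free of the disease."""
    return true_neg / (true_neg + false_pos)

# Hypothetical evaluation: 1,000 infected and 1,000 uninfected subjects,
# with a test that misses 10 infections and misclassifies 5 uninfected people.
sens = sensitivity(true_pos=990, false_neg=10)
spec = specificity(true_neg=995, false_pos=5)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
# prints "sensitivity = 99.0%, specificity = 99.5%"
```

In practice a reactive screening result is confirmed with a second, independent test (e.g., Western blot), which drives the combined specificity still higher.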
Progress in testing methodology has also enabled detection of viral genetic material, antigens and the virus itself in body fluids and cells. While not widely used for routine testing due to high cost and requirements in laboratory equipment, these direct testing techniques have confirmed the validity of the antibody tests (Jackson et al. J Clin Microbiol 1990;28:16; Busch et al. NEJM 1991;325:1; Silvester et al. J Acquir Immune Defic Syndr Hum Retrovirol 1995;8:411; Urassa et al. J Clin Virol 1999;14:25; Nkengasong et al. AIDS 1999;13:109; Samdal et al. Clin Diagn Virol 1996;7:55).
MYTH: There is no AIDS in Africa. AIDS is nothing more than a new name for old diseases.
FACT: The diseases that have come to be associated with AIDS in Africa - such as wasting syndrome, diarrheal diseases and TB - have long been severe burdens there. However, high rates of mortality from these diseases, formerly confined to the elderly and malnourished, are now common among HIV-infected young and middle-aged people, including well-educated members of the middle class (UNAIDS, 2000).
For example, in a study in Cote d'Ivoire, HIV-seropositive individuals with pulmonary tuberculosis (TB) were 17 times more likely to die within six months than HIV-seronegative individuals with pulmonary TB (Ackah et al. Lancet 1995; 345:607). In Malawi, mortality over three years among children who had received recommended childhood immunizations and who survived the first year of life was 9.5 times higher among HIV-seropositive children than among HIV-seronegative children. The leading causes of death were wasting and respiratory conditions (Taha et al. Pediatr Infect Dis J 1999;18:689). Elsewhere in Africa, findings are similar.
MYTH: HIV cannot be the cause of AIDS because researchers are unable to explain precisely how HIV destroys the immune system.
FACT: A great deal is known about the pathogenesis of HIV disease, even though important details remain to be elucidated. However, a complete understanding of the pathogenesis of a disease is not a prerequisite to knowing its cause. Most infectious agents have been associated with the disease they cause long before their pathogenic mechanisms have been discovered. Because research in pathogenesis is difficult when precise animal models are unavailable, the disease-causing mechanisms in many diseases, including tuberculosis and hepatitis B, are poorly understood. The critics' reasoning would lead to the conclusion that M. tuberculosis is not the cause of tuberculosis or that hepatitis B virus is not a cause of liver disease (Evans. Yale J Biol Med 1982;55:193).
MYTH: AZT and other antiretroviral drugs, not HIV, cause AIDS.
FACT: The vast majority of people with AIDS never received antiretroviral drugs, including those in developed countries prior to the licensure of AZT in 1987, and people in developing countries today where very few individuals have access to these medications (UNAIDS, 2000).
As with medications for any serious diseases, antiretroviral drugs can have toxic side effects. However, there is no evidence that antiretroviral drugs cause the severe immunosuppression that typifies AIDS, and abundant evidence that antiretroviral therapy, when used according to established guidelines, can improve the length and quality of life of HIV-infected individuals.
In the 1980s, clinical trials enrolling patients with AIDS found that AZT given as single-drug therapy conferred a modest (and short-lived) survival advantage compared to placebo. Among HIV-infected patients who had not yet developed AIDS, placebo-controlled trials found that AZT given as single-drug therapy delayed, for a year or two, the onset of AIDS-related illnesses. Significantly, long-term follow-up of these trials did not show a prolonged benefit of AZT, but also never indicated that the drug increased disease progression or mortality. The lack of excess AIDS cases and death in the AZT arms of these placebo-controlled trials effectively counters the argument that AZT causes AIDS (NIAID, 1995).
Subsequent clinical trials found that patients receiving two-drug combinations had up to 50 percent increases in time to progression to AIDS and in survival when compared to people receiving single-drug therapy. In more recent years, three-drug combination therapies have produced another 50 percent to 80 percent improvements in progression to AIDS and in survival when compared to two-drug regimens in clinical trials (HHS, 2004). Use of potent anti-HIV combination therapies has contributed to dramatic reductions in the incidence of AIDS and AIDS-related deaths in populations where these drugs are widely available, an effect which clearly would not be seen if antiretroviral drugs caused AIDS (Figure 1; CDC. HIV AIDS Surveillance Report 1999;11:1; Palella et al. NEJM 1998;338:853; Mocroft et al. Lancet 1998;352:1725; Mocroft et al. Lancet 2000;356:291; Vittinghoff et al. J Infect Dis 1999;179:717; Detels et al. JAMA 1998;280:1497; de Martino et al. JAMA 2000;284:190; CASCADE Collaboration. Lancet 2000;355:1158; Hogg et al. CMAJ 1999;160:659; Schwarcz et al. Am J Epidemiol 2000;152:178; Kaplan et al. Clin Infect Dis 2000;30:S5; McNaghten et al. AIDS 1999;13:1687).
MYTH: Behavioral factors such as recreational drug use and multiple sexual partners account for AIDS.
FACT: The proposed behavioral causes of AIDS, such as multiple sexual partners and long-term recreational drug use, have existed for many years. The epidemic of AIDS, characterized by the occurrence of formerly rare opportunistic infections such as Pneumocystis carinii pneumonia (PCP) did not occur in the United States until a previously unknown human retrovirus - HIV - spread through certain communities (NIAID, 1995a; NIAID, 1995b).
Compelling evidence against the hypothesis that behavioral factors cause AIDS comes from recent studies that have followed cohorts of homosexual men for long periods of time and found that only HIV-seropositive men develop AIDS.
For example, in a prospectively studied cohort in Vancouver, 715 homosexual men were followed for a median of 8.6 years. Among 365 HIV-positive individuals, 136 developed AIDS. No AIDS-defining illnesses occurred among 350 seronegative men despite the fact that these men reported appreciable use of inhalable nitrites ("poppers") and other recreational drugs, and frequent receptive anal intercourse (Schechter et al. Lancet 1993;341:658).
Other studies show that among homosexual men and injection-drug users, the specific immune deficit that leads to AIDS - a progressive and sustained loss of CD4+ T cells - is extremely rare in the absence of other immunosuppressive conditions. For example, in the Multicenter AIDS Cohort Study, more than 22,000 T-cell determinations in 2,713 HIV-seronegative homosexual men revealed only one individual with a CD4+ T-cell count persistently lower than 300 cells/mm3 of blood, and this individual was receiving immunosuppressive therapy (Vermund et al. NEJM 1993;328:442).
In a survey of 229 HIV-seronegative injection-drug users in New York City, mean CD4+ T-cell counts of the group were consistently more than 1000 cells/mm3 of blood. Only two individuals had two CD4+ T-cell measurements of less than 300/mm3 of blood, one of whom died with cardiac disease and non-Hodgkin's lymphoma listed as the cause of death (Des Jarlais et al. J Acquir Immune Defic Syndr 1993;6:820).
MYTH: AIDS among transfusion recipients is due to underlying diseases that necessitated the transfusion, rather than to HIV.
FACT: This notion is contradicted by a report by the Transfusion Safety Study Group (TSSG), which compared HIV-negative and HIV-positive blood recipients who had been given transfusions for similar diseases. Approximately 3 years after the transfusion, the mean CD4+ T-cell count in 64 HIV-negative recipients was 850/mm3 of blood, while 111 HIV-seropositive individuals had average CD4+ T-cell counts of 375/mm3 of blood. By 1993, there were 37 cases of AIDS in the HIV-infected group, but not a single AIDS-defining illness in the HIV-seronegative transfusion recipients (Donegan et al. Ann Intern Med 1990;113:733; Cohen. Science 1994;266:1645).
MYTH: High usage of clotting factor concentrate, not HIV, leads to CD4+ T-cell depletion and AIDS in hemophiliacs.
FACT: This view is contradicted by many studies. For example, among HIV-seronegative patients with hemophilia A enrolled in the Transfusion Safety Study, no significant differences in CD4+ T-cell counts were noted between 79 patients with no or minimal factor treatment and 52 with the largest amount of lifetime treatments. Patients in both groups had CD4+ T cell-counts within the normal range (Hasset et al. Blood 1993;82:1351). In another report from the Transfusion Safety Study, no instances of AIDS-defining illnesses were seen among 402 HIV-seronegative hemophiliacs who had received factor therapy (Aledort et al. NEJM 1993;328:1128).
In a cohort in the United Kingdom, researchers matched 17 HIV-seropositive hemophiliacs with 17 HIV-seronegative hemophiliacs with regard to clotting factor concentrate usage over a ten-year period. During this time, 16 AIDS-defining clinical events occurred in 9 patients, all of whom were HIV-seropositive. No AIDS-defining illnesses occurred among the HIV-negative patients. In each pair, the mean CD4+ T cell count during follow-up was, on average, 500 cells/mm3 lower in the HIV-seropositive patient (Sabin et al. BMJ 1996;312:207).
Among HIV-infected hemophiliacs, Transfusion Safety Study investigators found that neither the purity nor the amount of Factor VIII therapy had a deleterious effect on CD4+ T cell counts (Gjerset et al., Blood 1994;84:1666). Similarly, the Multicenter Hemophilia Cohort Study found no association between the cumulative dose of plasma concentrate and incidence of AIDS among HIV-infected hemophiliacs (Goedert et al. NEJM 1989;321:1141.).
MYTH: The distribution of AIDS cases casts doubt on HIV as the cause. Viruses are not gender-specific, yet only a small proportion of AIDS cases are among women.
FACT: The distribution of AIDS cases, whether in the United States or elsewhere in the world, invariably mirrors the prevalence of HIV in a population. In the United States, HIV first appeared in populations of homosexual men and injection-drug users, a majority of whom are male. Because HIV is spread primarily through sex or by the exchange of HIV-contaminated needles during injection-drug use, it is not surprising that a majority of U.S. AIDS cases have occurred in men (U.S. Census Bureau, 1999; UNAIDS, 2000).
Increasingly, however, women in the United States are becoming HIV-infected, usually through the exchange of HIV-contaminated needles or sex with an HIV-infected male. The CDC estimates that 30 percent of new HIV infections in the United States in 1998 were in women. As the number of HIV-infected women has risen, so too has the number of female AIDS patients in the United States. Approximately 23 percent of U.S. adult/adolescent AIDS cases reported to the CDC in 1998 were among women. In 1998, AIDS was the fifth leading cause of death among women aged 25 to 44 in the United States, and the third leading cause of death among African-American women in that age group (NIAID Fact Sheet: HIV/AIDS Statistics).
In Africa, HIV was first recognized in sexually active heterosexuals, and AIDS cases in Africa have occurred at least as frequently in women as in men. Overall, the worldwide distribution of HIV infection and AIDS between men and women is approximately 1 to 1 (U.S. Census Bureau, 1999; UNAIDS, 2000).
MYTH: HIV cannot be the cause of AIDS because the body develops a vigorous antibody response to the virus.
FACT: This reasoning ignores numerous examples of viruses other than HIV that can be pathogenic after evidence of immunity appears. Measles virus may persist for years in brain cells, eventually causing a chronic neurologic disease despite the presence of antibodies. Viruses such as cytomegalovirus, herpes simplex and varicella zoster may be activated after years of latency even in the presence of abundant antibodies. In animals, viral relatives of HIV with long and variable latency periods, such as visna virus in sheep, cause central nervous system damage even after the production of antibodies (NIAID, 1995).
Also, HIV is well recognized as being able to mutate to avoid the ongoing immune response of the host (Levy. Microbiol Rev 1993;57:183).
MYTH: Only a small number of CD4+ T cells are infected by HIV, not enough to damage the immune system.
FACT: New techniques such as the polymerase chain reaction (PCR) have enabled scientists to demonstrate that a much larger proportion of CD4+ T cells are infected than previously realized, particularly in lymphoid tissues. Macrophages and other cell types are also infected with HIV and serve as reservoirs for the virus. Although the fraction of CD4+ T cells that is infected with HIV at any given time is never extremely high (only a small subset of activated cells serve as ideal targets of infection), several groups have shown that rapid cycles of death of infected cells and infection of new target cells occur throughout the course of disease (Richman J Clin Invest 2000;105:565).
MYTH: HIV is not the cause of AIDS because many individuals with HIV have not developed AIDS.
FACT: HIV disease has a prolonged and variable course. The median period of time between infection with HIV and the onset of clinically apparent disease is approximately 10 years in industrialized countries, according to prospective studies of homosexual men in which dates of seroconversion are known. Similar estimates of asymptomatic periods have been made for HIV-infected blood-transfusion recipients, injection-drug users and adult hemophiliacs (Alcabes et al. Epidemiol Rev 1993;15:303).
As with many diseases, a number of factors can influence the course of HIV disease. Factors such as age or genetic differences between individuals, the level of virulence of the individual strain of virus, as well as exogenous influences such as co-infection with other microbes may determine the rate and severity of HIV disease expression. Similarly, some people infected with hepatitis B, for example, show no symptoms or only jaundice and clear their infection, while others suffer disease ranging from chronic liver inflammation to cirrhosis and hepatocellular carcinoma. Co-factors probably also determine why some smokers develop lung cancer while others do not (Evans. Yale J Biol Med 1982;55:193; Levy. Microbiol Rev 1993;57:183; Fauci. Nature 1996;384:529).
MYTH: Some people have many symptoms associated with AIDS but do not have HIV infection.
FACT: Most AIDS symptoms result from the development of opportunistic infections and cancers associated with severe immunosuppression secondary to HIV.
However, immunosuppression has many other potential causes. Individuals who take glucocorticoids and/or immunosuppressive drugs to prevent transplant rejection or for autoimmune diseases can have increased susceptibility to unusual infections, as do individuals with certain genetic conditions, severe malnutrition and certain kinds of cancers. There is no evidence suggesting that the numbers of such cases have risen, while abundant epidemiologic evidence shows a staggering rise in cases of immunosuppression among individuals who share one characteristic: HIV infection (NIAID, 1995; UNAIDS, 2000).
MYTH: The spectrum of AIDS-related infections seen in different populations proves that AIDS is actually many diseases not caused by HIV.
FACT: The diseases associated with AIDS, such as PCP and Mycobacterium avium complex (MAC), are not caused by HIV but rather result from the immunosuppression caused by HIV disease. As the immune system of an HIV-infected individual weakens, he or she becomes susceptible to the particular viral, fungal and bacterial infections common in the community. For example, HIV-infected people in certain midwestern and mid-Atlantic regions are much more likely than people in New York City to develop histoplasmosis, which is caused by a fungus. A person in Africa is exposed to different pathogens than is an individual in an American city. Children may be exposed to different infectious agents than adults (USPHS/IDSA, 2001).
More information on this issue is available on the NIAID Focus On the HIV-AIDS Connection web page.
NIAID is a component of the National Institutes of Health (NIH), which is an agency of the Department of Health and Human Services. NIAID supports basic and applied research to prevent, diagnose, and treat infectious and immune-mediated illnesses, including HIV/AIDS and other sexually transmitted diseases, illness from potential agents of bioterrorism, tuberculosis, malaria, autoimmune disorders, asthma and allergies.
News releases, fact sheets and other NIAID-related materials are available on the NIAID Web site at http://www.niaid.nih.gov.
Office of Communications and Public Liaison
National Institute of Allergy and Infectious Diseases
National Institutes of Health
Bethesda, MD 20892
This article was created in 1994, last updated on February 27, 2003,
and posted to this Web site on September 26, 2005. | 1 | 6 |
The expected central targets for pruritoceptors, a subset of C fibers, are dorsal horn neurons located in the upper laminae of the spinal cord. Studies have investigated their potential role as second-order neurons in the itch circuit.
Simone et al. (2004) saw that most monkey spinothalamic tract (STT) neurons were capsaicin sensitive and a subset of these also responded to histamine. Work in rat (Jinks and Carstens 2002) found capsaicin also activated dorsal horn neurons responding to the itch mediator serotonin 5HT. Experiments in mouse showed that itch-selective spinal neurons, activated by histamine and 5HT as well as SLIGRL-NH2, an agonist for the PAR2 receptor, responded to nociceptive stimuli including heat and mustard oil (Akiyama et al., 2009a). The inhibition of itch by pain was shown in monkey by Davidson et al. (2007a), who observed that peripheral scratching inhibited histamine- or cowhage-sensitive STT neurons. They later found a group of histamine- and capsaicin-responsive cells that were inhibited by scratching upon application of the former but not the latter compound (Davidson et al., 2009). These data from different animal models suggest the dual activation of neurons by either itchy or painful stimuli occurs centrally, as predicted by selectivity theory. However, there is still debate about the applicability of selectivity versus labeled line theory to itch circuitry in the spinal cord.
In 2001, an STT population responsive to histamine was seen in cat spinal cord, providing early evidence for a potential itch-specific population (Andrew and Craig, 2001). Certain STT neurons responded only to histamine and not mustard oil, a painful stimulus. The later discovery of a role for the gastrin-releasing peptide receptor (GRPR) in itch (Sun and Chen, 2007; Sun et al., 2009) may also mark an itch-specific subset. This receptor is found in the spinal cord dorsal horn, and GRP, the putative ligand, is expressed in some small-diameter DRG neurons. Evoked nonhistaminergic itch, but not pain, is significantly decreased in GRPR mutant mice. When GRPR+ cells are ablated, both histaminergic and nonhistaminergic itch behavior are lost, whereas pain sensitivity is intact. Importantly, in GRPR-ablated lamina I dorsal horn, NK-1+ neurons, most of which are STT neurons required for both pain and itch behavior (see below), are still present, suggesting that GRPR+ and STT neurons comprise two separate populations. The GRPR+ neurons are candidates for the itch-specific spinal cord neurons postulated by labeled line theory. Electrophysiological recordings of these neurons demonstrating that they respond to itchy but not painful stimuli are needed to rule out the possibility that GRPR+ cells play a dispensable role in the pain pathway. It is also important to determine whether GRPR+ cells are projection neurons or interneurons.
Another dorsal horn population expresses NK-1, a G protein-coupled receptor (GPCR) demonstrated to play a critical role in itch produced by 5HT in rat (Carstens et al., 2010). Removal of NK-1+ neurons also leads to pain deficits (Mantyh et al., 1997; Nichols et al., 1999), suggesting this cell population contains the putative pruritoceptors of the selectivity model while also contributing to nociception. It will be interesting to see whether NK-1 overlaps with GRPR in the rat spinal cord and, conversely, whether NK-1 ablation in mouse matches the rat phenotype.
This also raises the important issue of species differences in pruritoception. Even rats, the closest model system to mice, are unique in that histamine does not induce a behavioral scratch response (Jinks and Carstens 2002). At the level of both DRG (Johanek et al., 2008; Namer et al., 2008) and spinal cord (Davidson et al., 2007b) in humans and nonhuman primates, there seem to be separate C fibers for histaminergic and nonhistaminergic (namely cowhage-induced) itch. Mice, however, demonstrate some overlap with respect to activation by these compounds (Akiyama et al., 2009a), highlighting another important species difference that must be acknowledged.
The role of spinal interneurons has been the subject of limited study, but a paper from Ross et al. (2010) identifies a subset of these cells involved in itch. The transcription factor Bhlhb5 is required for development of some dorsal horn neurons. Ablation of this gene from an inhibitory interneuron population marked by Pax2 leads to the development of skin lesions at ~2 months of age in these mice. These results, which parallel those of the VGLUT2 studies, offer a place for spinal interneurons in the itch circuit and call for their further investigation.
A preliminary picture of the pain and itch circuitry emerges from the collective DRG and spinal cord data. This shows how the selectivity model can explain itch and specifies the molecular identity of certain critical neurons in the pathway.
(08/15/2007) The spread of Sudden Oak Death, a disease that is rapidly killing forests in the western United States, is being worsened by human activities, report studies recently published in the Journal of Ecology and Ecological Applications.
Climate change reducing Lake Tahoe's water clarity
(08/15/2007) Lake Tahoe in Northern California is losing is characteristic water clarity due to pollution and climate change, reports a new study by the University of California at Davis.
Group seeks salvation for 189 endangered bird species
(08/15/2007) BirdLife International has launched an appeal to save 189 endangered bird species over the next 5 years. The U.K.-based conservation group is seeking to raise tens of millions of dollars through its Species Champions initiative, by finding "Species Champions" among individuals, private foundations, and companies who will fund the work of identified "Species Guardians" for each bird.
Conservation more effective than biofuels for fighting global warming
(08/15/2007) Conserving forests and grasslands may be a more effective land-use strategy for fighting climate change than growing biofuel crops argues a new paper published in the journal Science. Comparing emissions from various fuel crops versus carbon storage in natural ecosystems, Renton Righelato and Dominick Spracklen write that "forestation of an equivalent area of land would sequester two to nine times more carbon over a 30-year period than the emissions avoided by the use of the biofuel."
Earthquakes can break speed limit
(08/15/2007) Earthquakes can move faster than previously thought, with rupture rates well exceeding the conventional 3 kilometers per second, reports Oxford University professor Shamita Das writing in the journal Science. The finding suggests that earthquakes in the world's largest quake zones may be capable of more destruction than earlier projections suggested.
Arctic sea ice shrinks to record low in 2007
(08/15/2007) Arctic sea ice has shrunk to a record low according the Japan Aerospace Exploration agency.
Antarctic Bottom Water has warmed since 1992
(08/14/2007) Deep ocean waters near Antarctica have warmed significantly since 1992, though variable temperatures may make it difficult to determine whether this is a trend, reports a new study published in Geophysical Research Letters.
Geoengineering cure for global warming could cause problems
(08/14/2007) Proposed geoengineering schemes to reduce global warming may do more harm than good, warns a new study published in Geophysical Research Letters.
2004 Indian Ocean tsunami waves hit Florida, Maine
(08/14/2007) Waves from the devastating December 2004 tsunami were recorded along the Atlantic coast of North America, reports a new study published in Geophysical Research Letters.
Legless lizard retracts eyes to avoid retaliatory prey bites
(08/14/2007) For creatures without legs, snakes are remarkable predators. Pythons can capture and eat animals well over twice their size, while a mere drop of venom injected by an Australian death adder can kill a person. Scientists believe the main purpose for these adaptations is to help snakes avoid injury when pursuing and eating prey. However, snakes are not the only legless reptiles -- there are more than a dozen species of legless lizard distributed around the world. A new paper examines how these reptiles subdue their prey without venom or constriction.
Squirrels communicate with rattlesnakes using heated tail
(08/13/2007) Ground squirrels heat their tails to defend their young against predatory rattlesnakes, reports a study published in the early online edition of Proceedings of the National Academy of Sciences (PNAS).
Failing water supply destroyed lost city of Angkor Wat
(08/13/2007) The ancient city of Angkor in Cambodia was larger in extent than previously thought and fed by a single water system, according to a new map published by an international team of researchers. The study, published in the early online edition of the journal Proceedings of the National Academy of Sciences, suggests that the urban settlement sustained an elaborate water management network extending over more than 1,000 square kilometers.
New flycatcher bird species discovered in Peru
(08/13/2007) Scientists have discovered a previously unknown species of bird in dense bamboo thickets in the Peruvian Amazon.
Low deforestation countries to see least benefit from carbon trading
(08/13/2007) Countries that have done the best job protecting their tropical forests stand to gain the least from proposed incentives to combat global warming through carbon offsets, warns a new study published Tuesday in the journal Public Library of Science Biology (PLoS). The authors say that "high forest cover with low rates of deforestation" (HFLD) nations "could become the most vulnerable targets for deforestation if the Kyoto Protocol and upcoming negotiations on carbon trading fail to include intact standing forest."
Amazon deforestation in Brazil falls 29% for 2007
(08/13/2007) Deforestation in the Brazilian Amazon fell 29 percent for the 2006-2007 year, compared with the prior period. The loss of 3,863 square miles (10,010 square kilometers) of rainforest was the lowest since the Brazilian government started tracking deforestation on a yearly basis in 1988.
Global warming to stunt growth of rainforest trees
(08/12/2007) Global warming could reduce the growth rates of rainforest trees by 50 percent, reported research presented last week at the annual meeting of the Ecological Society of America in San Jose, California by Ken Feeley of Harvard University's Arnold Arboretum in Boston.
Scientists: Newsweek Erred in Global Warming Coverage
(08/12/2007) A statement from the University of Alabama argues that a recent Newsweek cover story on climate change made two important mistakes.
Climate change claims a snail
(08/12/2007) The Aldabra banded snail (Rachistia aldabrae), a rare and poorly known species found only on Aldabra atoll in the Indian Ocean, has apparently gone extinct due to declining rainfall in its niche habitat. While some may question lamenting the loss of a lowly algae-feeding gastropod on some unheard of chain of tropical islands, its unheralded passing is nevertheless important for the simple reason that Rachistia aldabrae may be a pioneer. As climate change increasingly brings local and regional shifts in precipitation and temperature, other species are expected to follow in its path.
Controversy over flawed NASA climate data changes little
(08/11/2007) NASA corrected an error in its U.S. air temperature data after a blogger, Steve McIntyre of Climate Audit, discovered a discrepancy for the years 2000-2006. The revised figures show that 1934, not 1998, was America's hottest year on record. The change has little effect on global temperature records, and the average temperature for 2001-2006 (0.66 C) is still warmer than that for 1930-1934 (0.63 C) in the United States.
European heat waves double in length since 1880
(08/11/2007) The most accurate measures of European daily temperatures ever indicate that the length of heat waves on the continent has doubled and the frequency of extremely hot days has nearly tripled in the past century. The new data shows that many previous assessments of daily summer temperature change underestimated heat wave events in western Europe by approximately 30 percent.
Floods affect 500 million people per year, will worsen with warming
(08/10/2007) Floods affect 500 million people a year and cause billions of dollars in damage, said U.N. officials Thursday.
New shrew species, orchid discovered in the Philippines
(08/10/2007) An unknown shrew species has been discovered on Palawan, a large island in the Philippines, by a Conservation International-led expedition.
Papua seeks funds for fighting global warming through forest conservation
(08/10/2007) In an article published today in The Wall Street Journal, Tom Wright profiles the nascent "avoided deforestation" carbon offset market in Indonesia's Papua province. Barnabas Suebu, governor of the province which makes up nearly half the island of New Guinea, has teamed with an Australian millionaire, Dorjee Sun, to develop a carbon offset plan that would see companies in developing countries pay for forest preservation in order to earn carbon credits. Compliance would be monitored via satellite.
U.N. sends team to investigate gorilla killings
(08/10/2007) The U.N. said it will send a team of experts to probe the killings of critically endangered mountain gorillas in the Democratic Republic of the Congo (DRC). Four gorillas were shot "execution-style" last month, while three others have been killed so far this year. Rangers believe illegal charcoal harvesters from Goma are to blame.
Melting permafrost affects greenhouse gas emissions
(08/10/2007) Permafrost -- the perpetually frozen foundation of the north -- isn't so permanent anymore, and scientists are scrambling to understand the pros and cons when terra firma goes soft.
Apple comes up a bit short on eco-credentials of new iMac
(08/10/2007) While Apple has touted the environmental attributes of its newest iMac, critics say the new computer failed to live up to the company's goals for the use of mercury, reports the San Jose Mercury. In May, Apple said it would eventually replace mercury-containing fluorescent backlights in its LCD monitors with LED backlights, but the new computers don't use the new technology. The company said it still faces technological hurdles in rolling out the new LCDs.
Floating sea ice shrinks in the Arctic
(08/10/2007) By one estimate, the extent of floating sea ice in the Arctic has shrunk more than in any summer ever recorded, reports the New York Times.
Temperate forests not a fix for global warming
(08/10/2007) Carbon sequestration projects in temperate regions -- already facing doubts by scientists -- were dealt another blow by Duke University-led research that found pine tree stands grown under elevated carbon dioxide conditions only store significant amounts of carbon when they receive sufficient amounts of water and nutrients.
U.S. government weather agency cuts hurricane outlook
(08/10/2007) The U.S. National Oceanic and Atmospheric Administration on Thursday reduced its forecast for the number of tropical storms and hurricanes expected during the 2007 Atlantic season. NOAA said it now expected between 13 and 16 named storms, with seven to nine becoming hurricanes and three to five of them classified as "major" hurricanes (categories 3, 4, or 5).
Amazon deforestation rate falls to lowest on record
(08/10/2007) Deforestation rates in the Brazilian Amazon for the previous year were the lowest on record, according to preliminary figures released by INPE, Brazil's National Institute of Space Research.
New Park in Argentina Protects 500,000 Penguins
(08/09/2007) The government of Argentina will create a new marine park along the coast of Patagonia, reports the Bronx Zoo-based Wildlife Conservation Society. Located in Golfo San Jorge, the park will protect more than half a million penguins and other rare seabirds.
Industrial pollution has caused Arctic warming since 1880s
(08/09/2007) Industrial soot emissions have been warming the Arctic since at the least the 1880s, reports a new study that examined "black carbon" levels in the Greenland ice sheet over the past 215 years. The research is published in current issue of the journal Science.
Global warming will slow, then accelerate reports ground-breaking model
(08/09/2007) Global warming will slow during the next few years but then accelerate with at least half of the years after 2009 warmer than 1998, the warmest year on record, reports a new study that is the first to incorporate information about the actual state of the ocean and the atmosphere, rather than the approximate ones most models use. The research, published by a team of scientists from the Hadley Center in the United Kingdom, appears in the current issue of the journal Science.
Experts: parks effectively protect rainforest in Peru
(08/09/2007) High-resolution satellite monitoring of the Amazon rainforest in Peru shows that land-use and conservation policies have had a measurable impact on deforestation rates. The research is published in the August 9, 2007, on-line edition of Science Express.
Wild ferrets, America's most endangered mammal, recover
(08/09/2007) Black-footed ferrets (Mustela nigripes), North America's most endangered mammal species, are recovering in their native Wyoming, reports a study published in the current issue of the journal Science.
Internet drives trafficking of endangered species
(08/09/2007) It's true, said U.S. Fish and Wildlife Service Special Agent Ed Newcomer, that the Internet has made wildlife crime easier, and easier to hide. But it's also made it easier for wildlife law enforcement agents to pose as potential customers -- and to catch people.
Lowland rainforest less diverse than previously thought
(08/09/2007) While rainforests are the world's libraries of biodiversity, species richness may be more evenly distributed in some forests than in others, reports an extensive new study by an international team of entomologists and botanists. The work, published in the current issue of the journal Nature, has important implications for forest management and conservation strategies.
Organic, shade grown cacao good for birds
(08/09/2007) Bird diversity in cacao farms in Panama is considerably higher when crops are grown in the shade of canopy trees, reports a study published earlier this year in Biodiversity and Conservation. The research has implications for biodiversity conservation and the sustainability of cacao plantations.
Dr. Marc Van Roosmalen, discover of unknown monkey species, freed in Brazil
(08/08/2007) Dr. Marc van Roosmalen, a renowned primatologist who has discovered seven species of monkeys in the Amazon rainforest, has been freed in Brazil. Dr. van Roosmalen had been charged with illegally keeping wild animals and embezzlement and sentenced to nearly 16 years in prison in a case that was widely criticized by scientists.
Ethnobotanist honored for contributions to wilderness medicine
(08/08/2007) Renowned ethnobotanist and conservationist Dr. Mark Plotkin of the Amazon Conservation Team was honored Wednesday with the 2007 Paul S. Auerbach Award, a distinction awarded by the Wilderness Medical Society (WMS).
Coral reefs declining faster than rainforests
(08/08/2007) Coral reefs in the Pacific Ocean are dying faster than previously thought due to coastal development, climate change, and disease, reports a study published Wednesday in the online journal PLoS One. Nearly 600 square miles of reef have disappeared per year since the late 1960s, a rate twice that of tropical rainforest loss.
7.4 magnitude earthquake hits Indonesia
(08/08/2007) A 7.4 magnitude earthquake hit Indonesia's West Java on Thursday, causing widespread panic according to Reuters. There are no immediate reports of damage or casualties.
Extinction of the Yangtze river dolphin is confirmed
(08/08/2007) After an extensive six-week search scientists have confirmed the probable extinction of the baiji or Yangtze river dolphin. The freshwater dolphin's extinction had been reported late last year.
Primatologist freed but questions remain for Brazil after "attack on science"
(08/08/2007) While primatologist Dr. Marc van Roosmalen has been freed from prison pending appeal, prominent scientists had stinging criticism for the Brazilian government over its increasingly "hostile" treatment of researchers. Before Roosmalen was released Tuesday, some scientists even threatened "civil disobedience," according to a report in the journal Nature.
100 years ago: oil shortages spur need for alternative fuels
(08/08/2007) The fuels committee of the Motor Union of Great Britain and Ireland has issued a valuable report on motor-car fuels... a famine in petrol appears to be inevitable in the near future, owing to the fact that demand is increasing at a rate much greater than the rate of increase of the supply. In 1904 the consumption of petrol in the United Kingdom was 12,000,000 gallons; in 1907 it had risen to 27,000,000 gallons... the committee discusses in the report other possible fuels. The supply is divided into two parts. The first includes all fuels limited in quantity...The second group contains one item only - alcohol - and it is evident from the whole tone of the report that the committee expects to find in denatured vegetable spirits the fuel of the future.
Economics of next generation biofuels
(08/08/2007) 'Second generation' biorefineries -- those making biofuel from lignocellulosic feedstocks like straw, grasses and wood -- have long been touted as the successor to today's grain ethanol plants, but until now the technology has been considered too expensive to compete. However, recent increases in grain prices mean that production costs are now similar for grain ethanol and second generation biofuels, according to a paper published in the first edition of Biofuels, Bioproducts & Biorefining.
Rare pygmy elephants endangered by logging in Borneo
(08/08/2007) Pygmy elephants are increasingly threatened by logging and forest conversion for agriculture in their native Borneo, reports a new satellite tracking study by WWF.
Afghanistan's recovery effort drives poaching of rare wildlife
(08/07/2007) Few people associate Afghanistan with wildlife and it would come as a surprise to many that the war-torn, but fledgling democracy is home to snow leopards, Persian leopards, five species of bush dog, Marco Polo Sheep, Asiatic Black Bear, Brown Bears, Striped Hyenas, and numerous bird of prey species. While much of this biodiversity has survived despite years of civil strife, Afghanistan's wildlife faces new pressures from the very people who are charged with rebuilding the country: contractors and the development community are driving the trade in rare and endangered wildlife. This development, coupled with a lack of laws regulating resource management and growing instability, complicates efforts to protect the country's wildlife. Working to address these challenges is Dr. Alex Dehgan, Afghanistan Country Director for the Wildlife Conservation Society (WCS). WCS is working to implement the Afghanistan Biodiversity Conservation Program, a three-year project funded by the US Agency for International Development to promote wildlife and resource conservation in the country.
U.S. court blocks sonar testing to protect whales
(08/07/2007) A U.S. federal court blocked the Navy from using a type of sonar that environmentalists say pose a threat to whales off the coast of California. The judge noted that the Navy's own analyses concluded that the Southern California exercises "will cause widespread harm to nearly thirty species of marine mammals, including five species of endangered whales, and may cause permanent injury and death" and characterized the Navy's proposed mitigation measures as "woefully inadequate and ineffectual."
New species discovered in "lost" African forest
(08/07/2007) Scientists have discovered several unknown species during an expedition to a forest that has been off-limits to researchers for nearly 50 years due to civil strife.
Copyright mongabay 2005-2013
Eight women representing prominent mental diagnoses in the 19th century. (Armand Gautier)
A mental disorder or psychiatric disorder is a psychological pattern or anomaly, potentially reflected in behavior, that is generally associated with distress or disability, and which is not considered part of normal development in a person's culture. Mental disorders are generally defined by a combination of how a person feels, acts, thinks or perceives. This may be associated with particular regions or functions of the brain or rest of the nervous system, often in a social context. The recognition and understanding of mental health conditions have changed over time and across cultures and there are still variations in definition, assessment and classification, although standard guideline criteria are widely used. In many cases, there appears to be a continuum between mental health and mental illness, making diagnosis complex. According to the World Health Organisation (WHO), over a third of people in most countries report problems at some time in their life which meet criteria for diagnosis of one or more of the common types of mental disorder.
The causes of mental disorders are varied and in some cases unclear, and theories may incorporate findings from a range of fields. Services are based in psychiatric hospitals or in the community, and assessments are carried out by psychiatrists, clinical psychologists and clinical social workers, using various methods but often relying on observation and questioning. Clinical treatments are provided by various mental health professionals. Psychotherapy and psychiatric medication are two major treatment options, as are social interventions, peer support and self-help. In a minority of cases there might be involuntary detention or involuntary treatment, where legislation allows. Stigma and discrimination can add to the suffering and disability associated with mental disorders (or with being diagnosed or judged as having a mental disorder), leading to various social movements attempting to increase understanding and challenge social exclusion. Prevention is now appearing in some mental health strategies.
The definition and classification of mental disorders is a key issue for researchers as well as service providers and those who may be diagnosed. Most international clinical documents use the term mental "disorder", while "illness" is also common. It has been noted that using the term "mental" (i.e., of the mind) is not necessarily meant to imply separateness from brain or body.
There are currently two widely established systems that classify mental disorders:
- ICD-10 Chapter V: Mental and behavioural disorders, since 1949 part of the International Classification of Diseases produced by the WHO,
- the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) produced by the American Psychiatric Association (APA) since 1952.
Both these list categories of disorder and provide standardized criteria for diagnosis. They have deliberately converged their codes in recent revisions so that the manuals are often broadly comparable, although significant differences remain. Other classification schemes may be used in non-western cultures, for example the Chinese Classification of Mental Disorders, and other manuals may be used by those of alternative theoretical persuasions, for example the Psychodynamic Diagnostic Manual. In general, mental disorders are classified separately from neurological disorders, learning disabilities or mental retardation.
Unlike the DSM and ICD, some approaches are not based on identifying distinct categories of disorder using dichotomous symptom profiles intended to separate the abnormal from the normal. There is significant scientific debate about the relative merits of categorical versus such non-categorical (or hybrid) schemes, also known as continuum or dimensional models. A spectrum approach may incorporate elements of both.
In the scientific and academic literature on the definition or classification of mental disorder, one extreme argues that it is entirely a matter of value judgements (including of what is normal) while another proposes that it is or could be entirely objective and scientific (including by reference to statistical norms). Common hybrid views argue that the concept of mental disorder is objective even if only a "fuzzy prototype" that can never be precisely defined, or conversely that the concept always involves a mixture of scientific facts and subjective value judgments. Although the diagnostic categories are referred to as 'disorders', they are presented as medical diseases, but are not validated in the same way as most medical diagnoses. Some neurologists argue that classification will only be reliable and valid when based on neurobiological features rather than clinical interview, while others suggest that the differing ideological and practical perspectives need to be better integrated.
The DSM and ICD approach remains under attack both because of the implied causality model and because some researchers believe it better to aim at underlying brain differences which can precede symptoms by many years.
Anxiety or fear that interferes with normal functioning may be classified as an anxiety disorder. Commonly recognized categories include specific phobias, generalized anxiety disorder, social anxiety disorder, panic disorder, agoraphobia, obsessive-compulsive disorder and post-traumatic stress disorder.
Other affective (emotion/mood) processes can also become disordered. Mood disorder involving unusually intense and sustained sadness, melancholia, or despair is known as major depression (also known as unipolar or clinical depression). Milder but still prolonged depression can be diagnosed as dysthymia. Bipolar disorder (also known as manic depression) involves abnormally "high" or pressured mood states, known as mania or hypomania, alternating with normal or depressed mood. The extent to which unipolar and bipolar mood phenomena represent distinct categories of disorder, or mix and merge along a dimension or spectrum of mood, is subject to some scientific debate.
Patterns of belief, language use and perception of reality can become disordered (e.g., delusions, thought disorder, hallucinations). Psychotic disorders in this domain include schizophrenia, and delusional disorder. Schizoaffective disorder is a category used for individuals showing aspects of both schizophrenia and affective disorders. Schizotypy is a category used for individuals showing some of the characteristics associated with schizophrenia but without meeting cutoff criteria.
Personality—the fundamental characteristics of a person that influence thoughts and behaviors across situations and time—may be considered disordered if judged to be abnormally rigid and maladaptive. Although treated separately by some, the commonly used categorical schemes include them as mental disorders, albeit on a separate "axis II" in the case of the DSM-IV. A number of different personality disorders are listed, including those sometimes classed as "eccentric", such as paranoid, schizoid and schizotypal personality disorders; types that have been described as "dramatic" or "emotional", such as antisocial, borderline, histrionic or narcissistic personality disorders; and those sometimes classed as fear-related, such as anxious-avoidant, dependent, or obsessive-compulsive personality disorders. The personality disorders in general are defined as emerging in childhood, or at least by adolescence or early adulthood. The ICD also has a category for enduring personality change after a catastrophic experience or psychiatric illness. If an inability to sufficiently adjust to life circumstances begins within three months of a particular event or situation, and ends within six months after the stressor stops or is eliminated, it may instead be classed as an adjustment disorder. There is an emerging consensus that so-called "personality disorders", like personality traits in general, actually incorporate a mixture of acute dysfunctional behaviors that may resolve in short periods, and maladaptive temperamental traits that are more enduring. Furthermore, there are also non-categorical schemes that rate all individuals via a profile of different dimensions of personality without a symptom-based cutoff from normal personality variation, for example through schemes based on dimensional models.
Eating disorders involve disproportionate concern in matters of food and weight. Categories of disorder in this area include anorexia nervosa, bulimia nervosa, exercise bulimia or binge eating disorder.
Sexual and gender identity disorders may be diagnosed, including dyspareunia, gender identity disorder and ego-dystonic homosexuality. Various kinds of paraphilia are considered mental disorders (sexual arousal to objects, situations, or individuals that are considered abnormal or harmful to the person or others).
People who are abnormally unable to resist certain urges or impulses that could be harmful to themselves or others, may be classed as having an impulse control disorder, including various kinds of tic disorders such as Tourette's syndrome, and disorders such as kleptomania (stealing) or pyromania (fire-setting). Various behavioral addictions, such as gambling addiction, may be classed as a disorder. Obsessive-compulsive disorder can sometimes involve an inability to resist certain acts but is classed separately as being primarily an anxiety disorder.
The use of drugs (legal or illegal, including alcohol), when it persists despite significant problems related to its use, may be defined as a mental disorder. The DSM incorporates such conditions under the umbrella category of substance use disorders, which includes substance dependence and substance abuse. The DSM does not currently use the common term drug addiction, and the ICD simply refers to "harmful use". Disordered substance use may be due to a pattern of compulsive and repetitive use of the drug that results in tolerance to its effects and withdrawal symptoms when use is reduced or stopped.
People who suffer severe disturbances of their self-identity, memory and general awareness of themselves and their surroundings may be classed as having a dissociative disorder, such as depersonalization disorder or Dissociative Identity Disorder itself (which has also been called multiple personality disorder, or "split personality"). Other memory or cognitive disorders include amnesia or various kinds of old age dementia.
A range of developmental disorders that initially occur in childhood may be diagnosed, for example autism spectrum disorders, oppositional defiant disorder and conduct disorder, and attention deficit hyperactivity disorder (ADHD), which may continue into adulthood.
Conduct disorder, if continuing into adulthood, may be diagnosed as antisocial personality disorder (dissocial personality disorder in the ICD). Popularist labels such as psychopath (or sociopath) do not appear in the DSM or ICD but are linked by some to these diagnoses.
Somatoform disorders may be diagnosed when there are problems that appear to originate in the body that are thought to be manifestations of a mental disorder. This includes somatization disorder and conversion disorder. There are also disorders of how a person perceives their body, such as body dysmorphic disorder. Neurasthenia is an old diagnosis involving somatic complaints as well as fatigue and low spirits/depression, which is officially recognized by the ICD-10 but no longer by the DSM-IV.
There are attempts to introduce a category of relational disorder, where the diagnosis is of a relationship rather than on any one individual in that relationship. The relationship may be between children and their parents, between couples, or others. There already exists, under the category of psychosis, a diagnosis of shared psychotic disorder where two or more individuals share a particular delusion because of their close relationship with each other.
There are a number of uncommon psychiatric syndromes, which are often named after the person who first described them, such as Capgras syndrome, De Clerambault syndrome, Othello syndrome, Ganser syndrome, Cotard delusion, and Ekbom syndrome, and additional disorders such as the Couvade syndrome and Geschwind syndrome.
Various new types of mental disorder diagnosis are occasionally proposed. Those controversially considered by the official committees of the diagnostic manuals include self-defeating personality disorder, sadistic personality disorder, passive-aggressive personality disorder and premenstrual dysphoric disorder.
Two recent unique unofficial proposals are solastalgia by Glenn Albrecht and hubris syndrome by David Owen. The application of the concept of mental illness to the phenomena described by these authors has in turn been critiqued by Seamus Mac Suibhne.
Signs and symptoms
The likely course and outcome of mental disorders varies, depending on numerous factors related to the disorder itself, the individual as a whole, and the social environment. Some disorders are transient, while others may be more chronic in nature.
Even those disorders often considered the most serious and intractable, such as schizophrenia, psychotic disorders, and personality disorders, have varied courses. Long-term international studies of schizophrenia have found that over half of individuals recover in terms of symptoms, and around a fifth to a third in terms of symptoms and functioning, with some requiring no medication. At the same time, many have serious difficulties and support needs for many years, although "late" recovery is still possible. The World Health Organization concluded that the long-term studies' findings converged with others in "relieving patients, carers and clinicians of the chronicity paradigm which dominated thinking throughout much of the 20th century."
Around half of people initially diagnosed with bipolar disorder achieve syndromal recovery (no longer meeting criteria for the diagnosis) within six weeks, and nearly all achieve it within two years, with nearly half regaining their prior occupational and residential status in that period. However, nearly half go on to experience a new episode of mania or major depression within the next two years. Functioning has been found to vary, being poor during periods of major depression or mania but otherwise fair to good, and possibly superior during periods of hypomania in Bipolar II.
Some disorders may be very limited in their functional effects, while others may involve substantial disability and support needs. The degree of ability or disability may vary over time and across different life domains. Furthermore, continued disability has been linked to institutionalization, discrimination and social exclusion as well as to the inherent effects of disorders. Alternatively, functioning may be affected by the stress of having to hide a condition in work or school etc., by adverse effects of medications or other substances, or by mismatches between illness-related variations and demands for regularity.
It is also the case that, while often being characterized in purely negative terms, some mental traits or states labeled as disorders can also involve above-average creativity, non-conformity, goal-striving, meticulousness, or empathy. In addition, the public perception of the level of disability associated with mental disorders can change.
Nevertheless, internationally, people report equal or greater disability from commonly occurring mental conditions than from commonly occurring physical conditions, particularly in their social roles and personal relationships. The proportion with access to professional help for mental disorders is far lower, however, even among those assessed as having a severely disabling condition. Disability in this context may or may not involve such things as:
- Basic activities of daily living. Including looking after the self (health care, grooming, dressing, shopping, cooking etc.) or looking after accommodation (chores, DIY tasks etc.)
- Interpersonal relationships. Including communication skills, ability to form relationships and sustain them, ability to leave the home or mix in crowds or particular settings
- Occupational functioning. Ability to acquire a job and hold it, cognitive and social skills required for the job, dealing with workplace culture, or studying as a student.
In terms of total Disability-adjusted life years (DALYs), which is an estimate of how many years of life are lost due to premature death or to being in a state of poor health and disability, mental disorders rank amongst the most disabling conditions. Unipolar (also known as Major) depressive disorder is the third leading cause of disability worldwide, of any condition mental or physical, accounting for 65.5 million years lost. The total DALY does not necessarily indicate what is the most individually disabling, because it also depends on how common a condition is; for example, schizophrenia is found to be the most individually disabling mental disorder on average but is less common. Alcohol-use disorders are also high in the overall list, responsible for 23.7 million DALYs globally, while other drug-use disorders accounted for 8.4 million. Schizophrenia causes a total loss of 16.8 million DALY, and bipolar disorder 14.4 million. Panic disorder leads to 7 million years lost, obsessive-compulsive disorder 5.1, primary insomnia 3.6, and post-traumatic stress disorder 3.5 million DALYs.
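The burden figures quoted above can be collected and ranked with a short Python sketch (the numbers, in millions of DALYs, are taken directly from this paragraph; the variable name and dictionary structure are illustrative, not from any standard dataset):

```python
# Global DALYs attributed to selected mental disorders, in millions
# of years lost, as quoted in the text above.
dalys_millions = {
    "unipolar major depression": 65.5,
    "alcohol-use disorders": 23.7,
    "schizophrenia": 16.8,
    "bipolar disorder": 14.4,
    "other drug-use disorders": 8.4,
    "panic disorder": 7.0,
    "obsessive-compulsive disorder": 5.1,
    "primary insomnia": 3.6,
    "post-traumatic stress disorder": 3.5,
}

# Rank the disorders by global burden, highest first.
ranked = sorted(dalys_millions.items(), key=lambda item: item[1], reverse=True)
for disorder, daly in ranked:
    print(f"{disorder}: {daly} million DALYs")
```

As the paragraph notes, this ranking reflects both severity and prevalence, so the top entry (unipolar major depression) is not necessarily the most individually disabling condition.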
The first ever systematic description of global disability arising in youth, published in 2011, found that among 10 to 24 year olds nearly half of all disability (current and as estimated to continue) was due to mental and neurological conditions, including substance use disorders and conditions involving self-harm. Second to this were accidental injuries (mainly traffic collisions) accounting for 12 percent of disability, followed by communicable diseases at 10 percent. The disorders associated with most disability in high income countries were unipolar major depression (20%) and alcohol use disorder (11%). In the eastern Mediterranean region it was unipolar major depression (12%) and schizophrenia (7%), and in Africa it was unipolar major depression (7%) and bipolar disorder (5%).
Suicide, which is often attributed to some underlying mental disorder, is a leading cause of death among teenagers and adults under 35. There are an estimated 10 to 20 million non-fatal attempted suicides every year worldwide.
Mental disorders can arise from multiple sources, and in many cases there is no single accepted or consistent cause currently established. An eclectic or pluralistic mix of models may be used to explain particular disorders. The primary paradigm of contemporary mainstream Western psychiatry is said to be the biopsychosocial model which incorporates biological, psychological and social factors, although this may not always be applied in practice.
Biological psychiatry follows a biomedical model where many mental disorders are conceptualized as disorders of brain circuits likely caused by developmental processes shaped by a complex interplay of genetics and experience. A common assumption is that disorders may have resulted from genetic and developmental vulnerabilities, exposed by stress in life (for example in a diathesis–stress model), although there are various views on what causes differences between individuals. Some types of mental disorder may be viewed as primarily neurodevelopmental disorders.
Evolutionary psychology may be used as an overall explanatory theory, while attachment theory is another kind of evolutionary-psychological approach sometimes applied in the context of mental disorders. Psychoanalytic theories have continued to evolve alongside cognitive-behavioral and systemic-family approaches. A distinction is sometimes made between a "medical model" or a "social model" of disorder and disability.
Studies have indicated that variation in genes can play an important role in the development of mental disorders, although the reliable identification of connections between specific genes and specific categories of disorder has proven more difficult. Environmental events surrounding pregnancy and birth have also been implicated. Traumatic brain injury may increase the risk of developing certain mental disorders. There have been some tentative, inconsistent links found to certain viral infections, to substance misuse, and to general physical health.
Social influences have been found to be important, including abuse, neglect, bullying, social stress, and other negative or overwhelming life experiences. The specific risks and pathways to particular disorders are less clear, however. Aspects of the wider community have also been implicated, including employment problems, socioeconomic inequality, lack of social cohesion, problems linked to migration, and features of particular societies and cultures.
Abnormal functioning of neurotransmitter systems has been implicated in several mental disorders, including serotonin, norepinephrine, dopamine and glutamate systems. Differences have also been found in the size or activity of certain brain regions in some cases. Psychological mechanisms have also been implicated, such as cognitive (e.g. reasoning) biases, emotional influences, personality dynamics, temperament and coping style.
Psychiatrists seek to provide a medical diagnosis of individuals by an assessment of symptoms and signs associated with particular types of mental disorder. Other mental health professionals, such as clinical psychologists, may or may not apply the same diagnostic categories to their clinical formulation of a client's difficulties and circumstances. The majority of mental health problems are, at least initially, assessed and treated by family physicians (in the UK general practitioners) during consultations, who may refer a patient on for more specialist diagnosis in acute or chronic cases.
Routine diagnostic practice in mental health services typically involves an interview known as a mental status examination, where evaluations are made of appearance and behavior, self-reported symptoms, mental health history, and current life circumstances. The views of other professionals, relatives or other third parties may be taken into account. A physical examination to check for ill health or the effects of medications or other drugs may be conducted. Psychological testing is sometimes used via paper-and-pen or computerized questionnaires, which may include algorithms based on ticking off standardized diagnostic criteria, and in rare specialist cases neuroimaging tests may be requested, but such methods are more commonly found in research studies than routine clinical practice.
Time and budgetary constraints often limit practicing psychiatrists from conducting more thorough diagnostic evaluations. It has been found that most clinicians evaluate patients using an unstructured, open-ended approach, with limited training in evidence-based assessment methods, and that inaccurate diagnosis may be common in routine practice. In addition, comorbidity is very common in psychiatric diagnosis, where the same person meets the criteria for more than one disorder. On the other hand, a person may have several different difficulties only some of which meet the criteria for being diagnosed. There may be specific problems with accurate diagnosis in developing countries.
More structured approaches are being increasingly used to measure levels of mental illness.
- HoNOS is the most widely used measure in English mental health services, being used by at least 61 trusts. In HoNOS a score of 0-4 is given for each of 12 factors, based on functional living capacity. Research has been supportive of HoNOS, although some questions have been asked about whether it provides adequate coverage of the range and complexity of mental illness problems, and whether the fact that often only 3 of the 12 scales vary over time gives enough subtlety to accurately measure outcomes of treatment. HoNOS is regarded as the best available tool.
- WEMWBS uses a similar approach, giving a score of 1-5 for each of 14 factors. However in this tool the factors are based very largely on feelings.
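The total-score ranges implied by the two instruments above can be sketched in a few lines of Python (the factor counts and per-factor score ranges come from the text; the helper function is a hypothetical illustration, not part of either instrument):

```python
# HoNOS: 12 factors, each scored 0-4.  WEMWBS: 14 factors, each scored 1-5.

def score_range(n_factors, min_per_factor, max_per_factor):
    """Return the (lowest, highest) possible total score for an instrument."""
    return n_factors * min_per_factor, n_factors * max_per_factor

honos_range = score_range(12, 0, 4)    # (0, 48)
wemwbs_range = score_range(14, 1, 5)   # (14, 70)

print("HoNOS total score range:", honos_range)
print("WEMWBS total score range:", wemwbs_range)
```

Note the differing floors: a WEMWBS total can never be below 14 because each factor contributes at least 1, whereas a HoNOS total of 0 is possible.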
The 2004 WHO report "Prevention of Mental Disorders" stated that "Prevention of these disorders is obviously one of the most effective ways to reduce the [disease] burden."
Risk factors for mental illness include genetic inheritance, life experiences and substance intake. Risk factors involving parenting include parental rejection, lack of parental warmth, high hostility, harsh discipline, high maternal negative affect, parental favouritism, anxious childrearing, modelling of dysfunctional and drug-abusing behaviour, and child abuse (emotional, physical and sexual). Other risk factors may include family history (e.g. of anxiety), temperament and attitudes (e.g. pessimism). In schizophrenia and psychosis, risk factors include migration and discrimination, childhood trauma, bereavement or separation in families, and abuse of drugs.
Screening questionnaires for assessing the risks of developing a mental disorder face the problem of low specificity, which means that the majority of the people with an increased risk will not develop the disorder. However, combining risk factors gives a higher estimated risk. For instance, the risk of depression is higher for a widow who also has an illness and lives alone.
The 2011 European Psychiatric Association (EPA) guidance on prevention of mental disorders states "There is considerable evidence that various psychiatric conditions can be prevented through the implementation of effective evidence-based interventions."
A 2011 UK Department of Health report on the economic case for mental health promotion and mental illness prevention found that "many interventions are outstandingly good value for money, low in cost and often become self-financing over time, saving public expenditure".
Parenting is a causal factor in many mental illnesses, and evidence suggests that helping parents to be more effective with their children can address mental health needs.
For depressive disorders, research has shown a reduction in incidence of new cases when people participated in interventions, for instance by 22% and 38% in meta-analyses. In a study of patients with sub-threshold depression, those who received minimal-contact psychotherapy had a one-third lower incidence of major depressive disorder one year later (an incidence rate of 12% rather than 18%) than the control group. Such interventions also save costs. The Netherlands mental health care system provides preventive interventions, such as the Coping with Depression course for people with subthreshold depression. A meta-analysis showed that people who followed this course had a 38% lower incidence of developing a major depressive disorder than the control group. A stepped-care intervention (watchful waiting, CBT and medication if appropriate) achieved a 50% lower incidence rate in a patient group aged 75 or older. One depression study found a neutral effect compared to personal, social, and health education, and usual school provision, and included a comment on the potential for increased depression scores in people who have received CBT, due to greater self-recognition and acknowledgement of existing symptoms of depression and negative thinking styles. Other studies have found CBT to be equal in effectiveness to other interventions, or neutral.
For anxiety disorders, use of CBT with people at risk has significantly reduced the number of episodes of generalized anxiety disorder and other anxiety symptoms, and also given significant improvements in explanatory style, hopelessness, and dysfunctional attitudes. Other interventions (parental inhibition reduction, behaviourism, parental modelling, problem-solving and communication skills) have also produced significant benefits. In another study 3% of the group receiving the CBT intervention developed GAD by 12 months postintervention compared with 14% in the control group. Subthreshold panic disorder sufferers were found to significantly benefit from use of CBT. Use of CBT was found to significantly reduce social anxiety prevalence.
For psychosis and schizophrenia, usage of a number of drugs has been associated with development of the disorder, including cannabis, cocaine, and amphetamines. Studies have shown reductions in onset through preventative CBT. Another study showed that schizophrenia prevalence in people with a high genetic risk was significantly influenced by the parenting and family environment.
For bipolar disorder, stress (such as childhood adversity or highly conflictual families) is not a diagnostically specific causal agent, but does place genetically and biologically vulnerable individuals at risk for a more pernicious course of illness. There has been considerable debate regarding the causal relationship between usage of cannabis and bipolar disorder.
Further research is needed both on mental health causal factors, and on the effectiveness of prevention programs. Universal preventions (aimed at a population that has no increased risk for developing a mental disorder, such as school programs or mass media campaigns) need very high numbers to be statistically valid (sometimes known as the "power" problem). Approaches to overcome this are (1) focus on high-incidence groups (e.g. by targeting groups with high risk factors), (2) use multiple interventions to achieve greater, and thus more statistically valid, effects, (3) use cumulative meta-analyses of many trials, and (4) run very large trials.
The 2009 US National Academies publication on preventing mental, emotional, and behavioral disorders among young people focused on recent research and program experience and stated that "A number of promotion and prevention programs are now available that should be considered for broad implementation." A 2011 review of this by the authors said "A scientific base of evidence shows that we can prevent many mental, emotional, and behavioral disorders before they begin" and made recommendations including
- supporting the mental health and parenting skills of parents,
- encouraging the developmental competencies of children and
- using preventive strategies particularly for children at risk (such as children of parents with mental illness, or with family stresses such as divorce or job loss).
The 2012 UK Schizophrenia Commission recommended "a preventative strategy for psychosis including promoting protective factors for mental wellbeing and reducing risks such as cannabis use in early adolescence."
It is already known that home visiting programs for pregnant women and parents of young children can produce replicable effects on children's general health and development in a variety of community settings. Similarly positive benefits from social and emotional education are well proven.
Assessing parenting capability has been raised in child protection and other contexts. Delaying potential very young pregnancies could improve causal risk factors for mental health, such as parenting skills and home stability, and various approaches have been used to encourage such behaviour change. Compulsory contraception has been used to prevent future mental illness.
Prevention programs can face issues in (i) ownership, because health systems are typically targeted at current suffering, and (ii) funding, because program benefits come on longer timescales than the normal political and management cycle. Assembling collaborations of interested bodies appears to be an effective model for achieving sustained commitment and funding.
Mental health strategies
Prevention is currently a very small part of the spend of mental health systems. For instance, the 2009 UK Department of Health analysis of prevention expenditure does not include any apparent spend on mental health. The situation is the same in research.
However, prevention is beginning to appear in mental health strategies:
- In 2012 Mind, the UK mental health NGO, included "Staying well; Support people likely to develop mental health problems, to stay well." as its first goal for 2012-16.
- The 2011 mental health strategy of Manitoba (Canada) included intents to (i) reduce risk factors associated with mental ill-health and (ii) increase mental health promotion for both adults and children.
- The 2011 US National Prevention Strategy included mental and emotional well-being, with recommendations including (i) better parenting and (ii) early intervention.
- Australia's mental health plan for 2009-14 included "Prevention and Early Intervention" as priority 2.
- The 2008 EU "Pact for Mental Health" made recommendations for youth and education including (i) promotion of parenting skills, (ii) integration of socio-emotional learning into curricular and extracurricular educational activities, and (iii) early intervention throughout the educational system.
Treatment and support for mental disorders is provided in psychiatric hospitals, clinics or any of a diverse range of community mental health services. A number of professions have developed that specialize in the treatment of mental disorders. This includes the medical specialty of psychiatry (including psychiatric nursing), the field of psychology known as clinical psychology, and the practical application of sociology known as social work. There is also a wide range of psychotherapists (including family therapy), counselors, and public health professionals. In addition, there are peer support roles where personal experience of similar issues is the primary source of expertise. The different clinical and scientific perspectives draw on diverse fields of research and theory, and different disciplines may favor differing models, explanations and goals.
In some countries services are increasingly based on a recovery approach, intended to support each individual's personal journey to gain the kind of life they want, although there may also be 'therapeutic pessimism' in some areas.
There are a range of different types of treatment and what is most suitable depends on the disorder and on the individual. Many things have been found to help at least some people, and a placebo effect may play a role in any intervention or medication. In a minority of cases, individuals may be treated against their will, which can cause particular difficulties depending on how it is carried out and perceived.
A major option for many mental disorders is psychotherapy. There are several main types. Cognitive behavioral therapy (CBT) is widely used and is based on modifying the patterns of thought and behavior associated with a particular disorder. Psychoanalysis, addressing underlying psychic conflicts and defenses, has been a dominant school of psychotherapy and is still in use. Systemic therapy or family therapy is sometimes used, addressing a network of significant others as well as an individual.
Some psychotherapies are based on a humanistic approach. There are a number of specific therapies used for particular disorders, which may be offshoots or hybrids of the above types. Mental health professionals often employ an eclectic or integrative approach. Much may depend on the therapeutic relationship, and there may be problems with trust, confidentiality and engagement.
A major option for many mental disorders is psychiatric medication and there are several main groups. Antidepressants are used for the treatment of clinical depression, as well as often for anxiety and a range of other disorders. Anxiolytics (including sedatives) are used for anxiety disorders and related problems such as insomnia. Mood stabilizers are used primarily in bipolar disorder. Antipsychotics are used for psychotic disorders, notably for positive symptoms in schizophrenia, and also increasingly for a range of other disorders. Stimulants are commonly used, notably for ADHD.
Despite the different conventional names of the drug groups, there may be considerable overlap in the disorders for which they are actually indicated, and there may also be off-label use of medications. There can be problems with adverse effects of medication and adherence to them, and there is also criticism of pharmaceutical marketing and professional conflicts of interest.
Electroconvulsive therapy (ECT) is sometimes used in severe cases when other interventions for severe intractable depression have failed. Psychosurgery is considered experimental but is advocated by some neurologists in certain rare cases.
Counseling (professional) and co-counseling (between peers) may be used. Psychoeducation programs may provide people with the information to understand and manage their problems. Creative therapies are sometimes used, including music therapy, art therapy or drama therapy. Lifestyle adjustments and supportive measures are often used, including peer support, self-help groups for mental health and supported housing or supported employment (including social firms). Some advocate dietary supplements.
Reasonable accommodations (adjustments and supports) might be put in place to help an individual cope and succeed in environments despite potential disability related to mental health problems. This could include an emotional support animal or specifically trained psychiatric service dog.
Mental disorders are common. Worldwide, more than one in three people in most countries report meeting criteria for at least one mental disorder at some point in their life. In the United States, 46% qualify for a mental illness at some point. An ongoing survey indicates that anxiety disorders are the most common in all but one country, followed by mood disorders in all but two countries, while substance disorders and impulse-control disorders were consistently less prevalent. Rates varied by region.
A review of anxiety disorder surveys in different countries found average lifetime prevalence estimates of 16.6%, with women having higher rates on average. A review of mood disorder surveys in different countries found lifetime rates of 6.7% for major depressive disorder (higher in some studies, and in women) and 0.8% for Bipolar I disorder.
A 2004 cross-Europe study found that approximately one in four people reported meeting criteria at some point in their life for at least one of the DSM-IV disorders assessed, which included mood disorders (13.9%), anxiety disorders (13.6%) or alcohol disorder (5.2%). Approximately one in ten met criteria within a 12-month period. Women and younger people of either gender showed more cases of disorder. A 2005 review of surveys in 16 European countries found that 27% of adult Europeans are affected by at least one mental disorder in a 12 month period.
An international review of studies on the prevalence of schizophrenia found an average (median) figure of 0.4% for lifetime prevalence; it was consistently lower in poorer countries.
Studies of the prevalence of personality disorders (PDs) have been fewer and smaller-scale, but one broad Norwegian survey found a five-year prevalence of almost 1 in 7 (13.4%). Rates for specific disorders ranged from 0.8% to 2.8%, differing across countries, and by gender, educational level and other factors. A US survey that incidentally screened for personality disorder found a rate of 14.79%.
Approximately 7% of a preschool pediatric sample were given a psychiatric diagnosis in one clinical study, and approximately 10% of 1- and 2-year-olds receiving developmental screening have been assessed as having significant emotional/behavioral problems based on parent and pediatrician reports.
While rates of psychological disorders are often the same for men and women, women tend to have a higher rate of depression. Each year 73 million women are afflicted with major depression, and suicide ranks as the 7th leading cause of death for women between the ages of 20 and 59. Depressive disorders account for close to 41.9% of the disability from neuropsychiatric disorders among women, compared to 29.3% among men.
Ancient civilizations described and treated a number of mental disorders. The Greeks coined terms for melancholy, hysteria and phobia and developed the humorism theory. Mental disorders were described, and treatments developed, in Persia, Arabia and in the medieval Islamic world.
Conceptions of madness in the Middle Ages in Christian Europe were a mixture of the divine, diabolical, magical and humoral, as well as more down to earth considerations. In the early modern period, some people with mental disorders may have been victims of the witch-hunts but were increasingly admitted to local workhouses and jails or sometimes to private madhouses. Many terms for mental disorder that found their way into everyday use first became popular in the 16th and 17th centuries.
By the end of the 17th century and into the Enlightenment, madness was increasingly seen as an organic physical phenomenon with no connection to the soul or moral responsibility. Asylum care was often harsh and treated people like wild animals, but towards the end of the 18th century a moral treatment movement gradually developed. Clear descriptions of some syndromes may be rare prior to the 19th century.
Industrialization and population growth led to a massive expansion of the number and size of insane asylums in every Western country in the 19th century. Numerous different classification schemes and diagnostic terms were developed by different authorities, and the term psychiatry was coined, though medical superintendents were still known as alienists.
The turn of the 20th century saw the development of psychoanalysis, which would later come to the fore, along with Kraepelin's classification scheme. Asylum "inmates" were increasingly referred to as "patients", and asylums renamed as hospitals.
Europe and the U.S.
Early in the 20th century in the United States, a mental hygiene movement developed, aiming to prevent mental disorders. Clinical psychology and social work developed as professions. World War I saw a massive increase of conditions that came to be termed "shell shock".
World War II saw the development in the U.S. of a new psychiatric manual for categorizing mental disorders, which along with existing systems for collecting census and hospital statistics led to the first Diagnostic and Statistical Manual of Mental Disorders (DSM). The International Classification of Diseases (ICD) also developed a section on mental disorders. The term stress, having emerged out of endocrinology work in the 1930s, was increasingly applied to mental disorders.
Electroconvulsive therapy, insulin shock therapy, lobotomies and the "neuroleptic" chlorpromazine came to be used by mid-century. An antipsychiatry movement came to the fore in the 1960s. Deinstitutionalization gradually occurred in the West, with isolated psychiatric hospitals being closed down in favor of community mental health services. A consumer/survivor movement gained momentum. Other kinds of psychiatric medication gradually came into use, such as "psychic energizers" (later antidepressants) and lithium. Benzodiazepines gained widespread use in the 1970s for anxiety and depression, until dependency problems curtailed their popularity.
Advances in neuroscience, genetics and psychology led to new research agendas. Cognitive behavioral therapy and other psychotherapies developed. The DSM and then ICD adopted new criteria-based classifications, and the number of "official" diagnoses saw a large expansion. Through the 1990s, new SSRI-type antidepressants became some of the most widely prescribed drugs in the world, as later did antipsychotics. Also during the 1990s, a recovery approach developed.
Society and culture
Different societies or cultures, even different individuals in a subculture, can disagree as to what constitutes optimal versus pathological biological and psychological functioning. Research has demonstrated that cultures vary in the relative importance placed on, for example, happiness, autonomy, or social relationships for pleasure. Likewise, the fact that a behavior pattern is valued, accepted, encouraged, or even statistically normative in a culture does not necessarily mean that it is conducive to optimal psychological functioning.
People in all cultures find some behaviors bizarre or even incomprehensible. But just what they feel is bizarre or incomprehensible is ambiguous and subjective. These differences in determination can become highly contentious. Religious, spiritual, or transpersonal experiences and beliefs are typically not defined as disordered, especially if widely shared, despite meeting many criteria of delusional or psychotic disorders. Even when a belief or experience can be shown to produce distress or disability—the ordinary standard for judging mental disorders—the presence of a strong cultural basis for that belief, experience, or interpretation of experience, generally disqualifies it from counting as evidence of mental disorder.
The process by which conditions and difficulties come to be defined and treated as medical conditions and problems, and thus come under the authority of doctors and other health professionals, is known as medicalization or pathologization.
The consumer/survivor movement (also known as user/survivor movement) is made up of individuals (and organizations representing them) who are clients of mental health services or who consider themselves survivors of psychiatric interventions. Activists campaign for improved mental health services and for more involvement and empowerment within mental health services, policies and wider society. Patient advocacy organizations have expanded with increasing deinstitutionalization in developed countries, working to challenge the stereotypes, stigma and exclusion associated with psychiatric conditions. There is also a carers rights movement of people who help and support people with mental health conditions, who may be relatives, and who often work in difficult and time-consuming circumstances with little acknowledgement and without pay. An antipsychiatry movement fundamentally challenges mainstream psychiatric theory and practice, including in some cases asserting that psychiatric concepts and diagnoses of 'mental illness' are neither real nor useful. Alternatively, a movement for global mental health has emerged, defined as 'the area of study, research and practice that places a priority on improving mental health and achieving equity in mental health for all people worldwide'.
Current diagnostic guidelines, namely the DSM and to some extent the ICD, have been criticized as having a fundamentally Euro-American outlook. Opponents argue that even when diagnostic criteria are used across different cultures, it does not mean that the underlying constructs have validity within those cultures, as even reliable application can prove only consistency, not legitimacy. Advocating a more culturally sensitive approach, critics such as Carl Bell and Marcello Maviglia contend that the cultural and ethnic diversity of individuals is often discounted by researchers and service providers.
Cross-cultural psychiatrist Arthur Kleinman contends that the Western bias is ironically illustrated in the introduction of cultural factors to the DSM-IV. Disorders or concepts from non-Western or non-mainstream cultures are described as "culture-bound", whereas standard psychiatric diagnoses are given no cultural qualification whatsoever, revealing to Kleinman an underlying assumption that Western cultural phenomena are universal. Kleinman's negative view towards the culture-bound syndrome is largely shared by other cross-cultural critics. Common responses included both disappointment over the large number of documented non-Western mental disorders still left out and frustration that even those included are often misinterpreted or misrepresented.
Many mainstream psychiatrists are dissatisfied with the new culture-bound diagnoses, although for partly different reasons. Robert Spitzer, a lead architect of the DSM-III, has argued that adding cultural formulations was an attempt to appease cultural critics, and has stated that they lack any scientific rationale or support. Spitzer also posits that the new culture-bound diagnoses are rarely used, maintaining that the standard diagnoses apply regardless of the culture involved. In general, mainstream psychiatric opinion remains that if a diagnostic category is valid, cross-cultural factors are either irrelevant or are significant only to specific symptom presentations.
Clinical conceptions of mental illness also overlap with personal and cultural values in the domain of morality, so much so that it is sometimes argued that separating the two is impossible without fundamentally redefining the essence of being a particular person in a society. In clinical psychiatry, persistent distress and disability indicate an internal disorder requiring treatment; but in another context, that same distress and disability can be seen as an indicator of emotional struggle and the need to address social and structural problems. This dichotomy has led some academics and clinicians to advocate a postmodernist conceptualization of mental distress and well-being.
Such approaches, along with cross-cultural and "heretical" psychologies centered on alternative cultural and ethnic and race-based identities and experiences, stand in contrast to the mainstream psychiatric community's alleged avoidance of any explicit involvement with either morality or culture. In many countries there are attempts to challenge perceived prejudice against minority groups, including alleged institutional racism within psychiatric services. There are also ongoing attempts to improve professional cross cultural sensitivity.
Laws and policies
Three quarters of countries around the world have mental health legislation. Compulsory admission to mental health facilities (also known as involuntary commitment) is a controversial topic. It can impinge on personal liberty and the right to choose, and carry the risk of abuse for political, social and other reasons; yet it can potentially prevent harm to self and others, and assist some people in attaining their right to healthcare when they may be unable to decide in their own interests.
All human rights oriented mental health laws require proof of the presence of a mental disorder as defined by internationally accepted standards, but the type and severity of disorder that counts can vary in different jurisdictions. The two most often utilized grounds for involuntary admission are said to be serious likelihood of immediate or imminent danger to self or others, and the need for treatment. Applications for someone to be involuntarily admitted usually come from a mental health practitioner, a family member, a close relative, or a guardian. Human-rights-oriented laws usually stipulate that independent medical practitioners or other accredited mental health practitioners must examine the patient separately and that there should be regular, time-bound review by an independent review body. The individual should also have personal access to independent advocacy.
In order for involuntary treatment to be administered (by force if necessary), it should be shown that an individual lacks the mental capacity for informed consent (i.e. to understand treatment information and its implications, and therefore be able to make an informed choice to either accept or refuse). Legal challenges in some areas have resulted in supreme court decisions that a person does not have to agree with a psychiatrist's characterization of the issues as constituting an "illness", nor agree with a psychiatrist's belief in the value of medication, but only recognize the issues and the information about treatment options.
Proxy consent (also known as surrogate or substituted decision-making) may be transferred to a personal representative, a family member or a legally appointed guardian. Moreover, patients may be able to make, when they are considered well, an advance directive stipulating how they wish to be treated should they be deemed to lack mental capacity in future. The right to supported decision-making, where a person is helped to understand and choose treatment options before they can be declared to lack capacity, may also be included in legislation. There should at the very least be shared decision-making as far as possible. Involuntary treatment laws are increasingly extended to those living in the community, for example outpatient commitment laws (known by different names) are used in New Zealand, Australia, the United Kingdom and most of the United States.
The World Health Organization reports that in many instances national mental health legislation takes away the rights of persons with mental disorders rather than protecting rights, and is often outdated. In 1991, the United Nations adopted the Principles for the Protection of Persons with Mental Illness and the Improvement of Mental Health Care, which established minimum human rights standards of practice in the mental health field. In 2006, the UN formally agreed the Convention on the Rights of Persons with Disabilities to protect and enhance the rights and opportunities of disabled people, including those with psychosocial disabilities.
The term insanity, sometimes used colloquially as a synonym for mental illness, is often used technically as a legal term. The insanity defense may be used in a legal trial (known as the mental disorder defence in some countries).
Perception and discrimination
The social stigma associated with mental disorders is a widespread problem. The US Surgeon General stated in 1999 that: "Powerful and pervasive, stigma prevents people from acknowledging their own mental health problems, much less disclosing them to others." Employment discrimination is reported to play a significant part in the high rate of unemployment among those with a diagnosis of mental illness. An Australian study found that having a mental illness is a bigger barrier to employment than a physical disability.
A 2008 study by Baylor University researchers found that clergy in the US often deny or dismiss the existence of a mental illness. Of 293 Christian church members, more than 32 percent were told by their church pastor that they or their loved one did not really have a mental illness, and that the cause of their problem was solely spiritual in nature, such as a personal sin, lack of faith or demonic involvement. The researchers also found that women were more likely than men to get this response. All participants in the study were previously diagnosed by a licensed mental health provider as having a serious mental illness. However, there is also research suggesting that people are often helped by extended families and supportive religious leaders who listen with kindness and respect, which can often contrast with usual practice in psychiatric diagnosis and medication.
Media and general public
Media coverage of mental illness comprises predominantly negative and pejorative depictions, for example, of incompetence, violence or criminality, with far less coverage of positive issues such as accomplishments or human rights issues. Such negative depictions, including in children's cartoons, are thought to contribute to stigma and negative attitudes in the public and in those with mental health problems themselves, although more sensitive or serious cinematic portrayals have increased in prevalence.
In the United States, the Carter Center has created fellowships for journalists in South Africa, the U.S., and Romania, to enable reporters to research and write stories on mental health topics. Former US First Lady Rosalynn Carter began the fellowships not only to train reporters in how to sensitively and accurately discuss mental health and mental illness, but also to increase the number of stories on these topics in the news media. There is also a World Mental Health Day, which in the US and Canada falls within a Mental Illness Awareness Week.
The general public have been found to hold a strong stereotype of dangerousness and desire for social distance from individuals described as mentally ill. A US national survey found that a higher percentage of people rate individuals described as displaying the characteristics of a mental disorder as "likely to do something violent to others", compared to the percentage rating individuals described as being "troubled".
Recent depictions in media have included leading characters successfully living with and managing a mental illness, including in Homeland (TV, 2011, bipolar disorder) and Iron Man 3 (film, 2013, posttraumatic stress disorder).
Despite public or media opinion, national studies have indicated that severe mental illness does not independently predict future violent behavior, on average, and is not a leading cause of violence in society. There is a statistical association with various factors that do relate to violence (in anyone), such as substance abuse and various personal, social and economic factors.
In fact, findings consistently indicate that it is many times more likely that people diagnosed with a serious mental illness living in the community will be the victims rather than the perpetrators of violence. In a study of individuals diagnosed with "severe mental illness" living in a US inner-city area, a quarter were found to have been victims of at least one violent crime over the course of a year, a proportion eleven times higher than the inner-city average, and higher in every category of crime including violent assaults and theft. People with a diagnosis may find it more difficult to secure prosecutions, however, due in part to prejudice and being seen as less credible.
However, there are some specific diagnoses, such as childhood conduct disorder or adult antisocial personality disorder or psychopathy, which are defined by, or are inherently associated with, conduct problems and violence. There are conflicting findings about the extent to which certain specific symptoms, notably some kinds of psychosis (hallucinations or delusions) that can occur in disorders such as schizophrenia, delusional disorder or mood disorder, are linked to an increased risk of serious violence on average. The mediating factors of violent acts, however, are most consistently found to be mainly socio-demographic and socio-economic factors such as being young, male, of lower socioeconomic status and, in particular, substance abuse (including alcoholism) to which some people may be particularly vulnerable.
High-profile cases have led to fears that serious crimes, such as homicide, have increased due to deinstitutionalization, but the evidence does not support this conclusion. Violence that does occur in relation to mental disorder (against the mentally ill or by the mentally ill) typically occurs in the context of complex social interactions, often in a family setting rather than between strangers. It is also an issue in health care settings and the wider community.
Psychopathology in non-human primates has been studied since the mid-20th century. Over 20 behavioral patterns in captive chimpanzees have been documented as (statistically) abnormal for frequency, severity or oddness—some of which have also been observed in the wild. Captive great apes show gross behavioral abnormalities such as stereotypy of movements, self-mutilation, disturbed emotional reactions (mainly fear or aggression) towards companions, lack of species-typical communications, and generalized learned helplessness. In some cases such behaviors are hypothesized to be equivalent to symptoms associated with psychiatric disorders in humans such as depression, anxiety disorders, eating disorders and post-traumatic stress disorder. Concepts of antisocial, borderline and schizoid personality disorders have also been applied to non-human great apes.
The risk of anthropomorphism is often raised with regard to such comparisons, and assessment of non-human animals cannot incorporate evidence from linguistic communication. However, available evidence may range from nonverbal behaviors—including physiological responses and homologous facial displays and acoustic utterances—to neurochemical studies. It is pointed out that human psychiatric classification is often based on statistical description and judgment of behaviors (especially when speech or language is impaired) and that the use of verbal self-report is itself problematic and unreliable.
Psychopathology has generally been traced, at least in captivity, to adverse rearing conditions such as early separation of infants from mothers; early sensory deprivation; and extended periods of social isolation. Studies have also indicated individual variation in temperament, such as sociability or impulsiveness. Particular causes of problems in captivity have included integration of strangers into existing groups and a lack of individual space, in which context some pathological behaviors have also been seen as coping mechanisms. Remedial interventions have included careful individually tailored re-socialization programs, behavior therapy, environment enrichment, and on rare occasions psychiatric drugs. Socialization has been found to work 90% of the time in disturbed chimpanzees, although restoration of functional sexuality and care-giving is often not achieved.
Laboratory researchers sometimes try to develop animal models of human mental disorders, including by inducing or treating symptoms in animals through genetic, neurological, chemical or behavioral manipulation, but this has been criticized on empirical grounds and opposed on animal rights grounds.
- The United States Department of Health and Human Services. Mental Health: A Report of the Surgeon General. "Chapter 2: The Fundamentals of Mental Health and Mental Illness." pp 39 Retrieved May 21, 2012
- WHO International Consortium in Psychiatric Epidemiology (2000) Cross-national comparisons of the prevalences and correlates of mental disorders, Bulletin of the World Health Organization v.78 n.4
- Berrios G E (April 1999). "Classifications in psychiatry: a conceptual history". Aust N Z J Psychiatry 33 (2): 145–60. doi:10.1046/j.1440-1614.1999.00555.x. PMID 10336212.
- Perring, C. (2005) Mental Illness Stanford Encyclopedia of Philosophy
- Katschnig, Heinz (February 2010). "Are psychiatrists an endangered species? Observations on internal and external challenges to the profession". World Psychiatry (World Psychiatric Association) 9 (1): 21–28. PMC 2816922. PMID 20148149.
- Kato, Tadafumi (October 2011). "A renovation of psychiatry is needed". World Psychiatry (World Psychiatric Association) 10 (3): 198–199. PMC 3188773. PMID 21991278.
- Gazzaniga, M.S., & Heatherton, T.F. (2006). Psychological Science. New York: W.W. Norton & Company, Inc.
- WebMD Inc (2005, July 01). Mental Health: Types of Mental Illness. Retrieved April 19, 2007, from http://www.webmd.com/mental-health/mental-health-types-illness
- United States Department of Health & Human Services. (1999). Overview of Mental Illness. Retrieved April 19, 2007
- NIMH (2005) Teacher's Guide: Information about Mental Illness and the Brain Curriculum supplement from The NIH Curriculum Supplements Series
- Phillip W. Long M.D. (1995-2008). "Disorders". Internet Mental Health. Retrieved 5 October 2009.
- "Mental Health: Types of Mental Illness". WebMD. Retrieved 2009-09-29.
- Akiskal HS, Benazzi F (May 2006). "The DSM-IV and ICD-10 categories of recurrent [major] depressive and bipolar II disorders: evidence that they lie on a dimensional spectrum". J Affect Disord 92 (1): 45–54. doi:10.1016/j.jad.2005.12.035. PMID 16488021.
- Clark LA (2007). "Assessment and diagnosis of personality disorder: perennial issues and an emerging reconceptualization". Annu Rev Psychol 58: 227–57. doi:10.1146/annurev.psych.57.102904.190200. PMID 16903806.
- Morey LC, Hopwood CJ, Gunderson JG, et al. (July 2007). "Comparison of alternative models for personality disorders". Psychol Med 37 (7): 983–94. doi:10.1017/S0033291706009482. PMID 17121690.
- Gamma A, Angst J, Ajdacic V, Eich D, Rössler W (March 2007). "The spectra of neurasthenia and depression: course, stability and transitions". Eur Arch Psychiatry Clin Neurosci 257 (2): 120–7. doi:10.1007/s00406-006-0699-6. PMID 17131216.
- Trimble, M. "Uncommon psychiatric syndromes, 4th edn: Edited by M David Enoch and Hadrian N Ball (Pp 260, £25.00). Published by Arnold Publishers, London, 2001. ISBN 0-340-76388-4". Journal of Neurology, Neurosurgery & Psychiatry 73 (2): 211–212. doi:10.1136/jnnp.73.2.211-c.
- Mac Suibhne, S. (2009). "What makes "a new mental illness"?: The cases of solastalgia and hubris syndrome". Cosmos and History 5 (2): 210–225.
- Harrison G, Hopper K, Craig T, et al. (June 2001). "Recovery from psychotic illness: a 15- and 25-year international follow-up study". Br J Psychiatry 178 (6): 506–17. doi:10.1192/bjp.178.6.506. PMID 11388966. Retrieved 2008-07-04.
- Jobe TH, Harrow M (December 2005). "Long-term outcome of patients with schizophrenia: a review" (PDF). Canadian Journal of Psychiatry 50 (14): 892–900. PMID 16494258. Retrieved 2008-07-05.
- Tohen M, Zarate CA, Hennen J, et al. (December 2003). "The McLean-Harvard First-Episode Mania Study: prediction of recovery and first recurrence". Am J Psychiatry 160 (12): 2099–107. doi:10.1176/appi.ajp.160.12.2099. PMID 14638578.
- Judd LL, Akiskal HS, Schettler PJ, et al. (December 2005). "Psychosocial disability in the course of bipolar I and II disorders: a prospective, comparative, longitudinal study" (PDF). Arch. Gen. Psychiatry 62 (12): 1322–30. doi:10.1001/archpsyc.62.12.1322. PMID 16330720.
- Center for Psychiatric Rehabilitation What is Psychiatric Disability and Mental Illness? Boston University, Retrieved January 2012
- Pilgrim, David; Rogers, Anne (2005). A sociology of mental health and illness (3rd ed.). [Milton Keynes]: Open University Press. ISBN 0-335-21583-1.
- Ferney, V. (2003) The Hierarchy of Mental Illness: Which diagnosis is the least debilitating? New York City Voices Jan/March
- Ormel, J.; Petukhova, M., Chatterji, S., et al. (1 May 2008). "Disability and treatment of specific mental and physical disorders across the world". The British Journal of Psychiatry 192 (5): 368–375. doi:10.1192/bjp.bp.107.039107. PMC 2681238. PMID 18450663.
- Collins PY, Patel V, Joestl SS, et al. (7 July 2011). "Grand challenges in global mental health". Nature 475: 27.
- Gore, FM; Bloem, PJ, Patton, GC, Ferguson, J, Joseph, V, Coffey, C, Sawyer, SM, Mathers, CD (2011-06-18). "Global burden of disease in young people aged 10-24 years: a systematic analysis". Lancet 377 (9783): 2093–102. doi:10.1016/S0140-6736(11)60512-6. PMID 21652063.
- "CIS: UN Body Takes On Rising Suicide Rates – Radio Free Europe / Radio Liberty 2006".
- O'Connor, Rory; Sheehy, Noel (29 Jan 2000). Understanding suicidal behaviour. Leicester: BPS Books. pp. 33–37. ISBN 978-1-85433-290-5.
- Bertolote JM, Fleischmann A (October 2002). "Suicide and psychiatric diagnosis: a worldwide perspective". World Psychiatry 1 (3): 181–5. ISSN 1723-8617. PMC 1489848. PMID 16946849.
- "Cannabis and mental health". Rcpsych.ac.uk. Retrieved 2013-04-23.
- Long-term effects of alcohol: mental health effects
- Anthony P. Winston, Elizabeth Hardwick and Neema Jaberi. "Neuropsychiatric effects of caffeine". Apt.rcpsych.org. Retrieved 2013-04-23.
- Kinderman P, Lobban F (2000). "Evolving formulations: Sharing complex information with clients". Behavioral and Cognitive Psychotherapy 28 (3): 307–10. doi:10.1017/S1352465800003118.
- HealthWise (2004) Mental Health Assessment. Yahoo! Health
- Davies T (May 1997). "ABC of mental health. Mental health assessment". BMJ 314 (7093): 1536–9. doi:10.1136/bmj.314.7093.1536. PMC 2126757. PMID 9183204.
- Kashner TM, Rush AJ, Surís A, et al. (May 2003). "Impact of structured clinical interviews on physicians' practices in community mental health settings". Psychiatr Serv 54 (5): 712–8. doi:10.1176/appi.ps.54.5.712. PMID 12719503.
- Shear MK, Greeno C, Kang J, et al. (April 2000). "Diagnosis of nonpsychotic patients in community clinics". Am J Psychiatry 157 (4): 581–7. doi:10.1176/appi.ajp.157.4.581. PMID 10739417.
- USA (2013-03-25). "Risk of psychopathology in adolescent offspr... [Br J Psychiatry. 2013] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- "Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities". Books.nap.edu. 2007-02-11. Retrieved 2013-04-23.
- Pillemer, K.; Suitor, J. J.; Pardo, S.; Henderson, C. (2010). "Mothers' Differentiation and Depressive Symptoms Among Adult Children". Journal of Marriage and Family 72 (2): 333–345. doi:10.1111/j.1741-3737.2010.00703.x. PMC 2894713. PMID 20607119.
- USA (2013-03-25). "Prevention of anxiety disorders. [Int Rev Psychiatry. 2007] -PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- "The Report". The Schizophrenia Commission. 2012-11-13. Retrieved 2013-04-23.
- USA (2013-03-25). "Prevention of late-life depression in primar... [Am J Psychiatry. 2006] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- Campion J, Bhui K, Bhugra D (October 2011). European Psychiatric Association (EPA) guidance on prevention of mental disorders. www.europsy.net/wordpress/wp-content/uploads/2012/03/campion.pdf?rs_file_key=5624846944f61cd2c21ebb448044554
- "Mental health promotion and mental illness prevention: The economic case - 2011 - News - LSE Enterprise - Business and consultancy -Home". .lse.ac.uk. Retrieved 2013-04-23.
- USA. "Preventing Depression". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- "Prevention of Major Depression by Ricardo F. Muñoz, Pim Cuijpers, Filip Smit, Alinne Barrera, Yan Leykin :: SSRN". Papers.ssrn.com. 2010-06-04. doi:10.1146/annurev-clinpsy-033109-132040. Retrieved 2013-04-23.
- USA (2013-03-25). "A randomized placebo-cont... [J Am Acad Child Adolesc Psychiatry. 2004] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA (2013-03-25). "Prevention of depressive symptoms in ... [J Consult Clin Psychol. 2007] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA. "Prevention of Depression in At-Risk Adolescents". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA (2013-03-25). "Minimal-contact psychotherapy for sub-thresh... [Br J Psychiatry. 2004] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA (2013-03-25). "Cost-effectiveness of preventing depression ... [Br J Psychiatry. 2006] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA (2013-03-25). "Opportunities for cost-effective prevent... [Arch Gen Psychiatry. 2006] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA (2013-03-25). "Psychoeducational treatment and prevention ... [Clin Psychol Rev. 2009] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA. "Stepped-care prevention of anxiety and d... [Arch Gen Psychiatry. 2009] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- "Classroom based cognitive behavioural therapy in reducing symptoms of depression in high risk adolescents: pragmatic cluster randomised controlled trial". BMJ. Retrieved 2013-04-23.
- "School-Based Primary Prevention of Depressive Symptomatology in Adolescents". Jar.sagepub.com. 1993-04-01. Retrieved 2013-04-23.
- USA (2013-03-25). "Anxiety Sensitivity Amelioration Training (... [J Anxiety Disord. 2007] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- Diana M. Higgins and Jeffrey E. Hecker. "J Clin Psychiatry / Document Archive". Article.psychiatrist.com. Retrieved 2013-04-23.
- "Behavior Therapy - Prevention of panic disorder". ScienceDirect.com. Retrieved 2013-04-23.
- USA (2013-03-25). "Universal-based prevention of syndrom... [J Consult Clin Psychol. 2009] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- Picchioni MM, Murray RM. Schizophrenia. BMJ. 2007;335(7610):91–5. doi:10.1136/bmj.39227.616447.BE. PMID 17626963.
- "Early interventions to prevent psychosis: systematic review and meta-analysis". Ncbi.nlm.nih.gov. 2013-01-08. doi:10.1111/acps.12028. Retrieved 2013-04-23.
- USA (2013-03-25). "Randomized controlled trial of interventio... [J Clin Psychiatry. 2012] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- "Genotype-environment interaction in schizophrenia-spectrum disorder". Bjp.rcpsych.org. Retrieved 2013-04-23.
- USA (2013-03-25). "Prevention of bipolar disorder in at-risk c... [Dev Psychopathol. 2008] - PubMed - NCBI". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- USA (2013-03-25). "Cannabis-Induced Bipolar Disorder with Psychotic Features". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- "Preventing depression in adolescents". BMJ. Retrieved 2013-04-23.
- "The National Academies Press: Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities". Nap.edu. Retrieved 2013-04-23.
- USA (2013-03-25). "World Psychiatry: Prevention of mental and behavioural disorders: implications for policy and practice". Ncbi.nlm.nih.gov. Retrieved 2013-04-23.
- Beardslee W, Chien P, Bell C. http://ps.psychiatryonline.org/data/Journals/PSS/3936/pss6203_0247.pdf
- Olds, D. L.; Sadler, L.; Kitzman, H. (2007). "Programs for parents of infants and toddlers: Recent evidence from randomized trials". Journal of Child Psychology and Psychiatry 48 (3–4): 355–391. doi:10.1111/j.1469-7610.2006.01702.x. PMID 17355402.
- Durlak, J. A.; Weissberg, R. P.; Dymnicki, A. B.; Taylor, R. D.; Schellinger, K. B. (2011). "The Impact of Enhancing Students' Social and Emotional Learning: A Meta-Analysis of School-Based Universal Interventions". Child Development 82 (1): 405–432. doi:10.1111/j.1467-8624.2010.01564.x. PMID 21291449.
- "Assessing Parenting Skills and Competencies". Childwelfare.gov. Retrieved 2013-04-23.
- "Assessing parenting capacity - CYF Practice Centre". Practicecentre.cyf.govt.nz. Retrieved 2013-04-23.
- "Using financial incentives to achieve healthy behaviour". BMJ. Retrieved 2013-04-23.
- "Adolescent Pregnancy Prevention Interventions - Family Home Visiting". Health.state.mn.us. Retrieved 2013-04-23.
- "Teenage pregnancy and social disadvantage: systematic review integrating controlled trials and qualitative studies". BMJ. Retrieved 2013-04-23.
- "Massachusetts Teen Parent Programs | Massachusetts Alliance on Teen Pregnancy". Massteenpregnancy.org. 2013-03-28. Retrieved 2013-04-23.
- Philip Reilly, The surgical solution: a history of involuntary sterilization in the United States (Baltimore: Johns Hopkins University Press, 1991).
- "Australia and New Zealand Health Policy". Biomedcentral.com. Retrieved 2013-04-23.
- "Medical Research Council - National Prevention Research Initiative (NPRI)". Mrc.ac.uk. 2009-11-16. Retrieved 2013-04-23.
- "Our plans". Mind. Retrieved 2013-04-23.
- "Mental Health and Spiritual Health Care | Healthy Living, Seniors and Consumer Affairs | Province of Manitoba". Gov.mb.ca. Retrieved 2013-04-23.
- Andreasen NC (1 May 1997). "What is psychiatry?". Am J Psychiatry 154 (5): 591–3. PMID 9137110.
- University of Melbourne. (2005, August 19). What is Psychiatry?. Retrieved April 19, 2007, from http://www.psychiatry.unimelb.edu.au/info/what_is_psych.html
- California Psychiatric Association. (2007, February 28). Frequently Asked Questions About Psychiatry & Psychiatrists. Retrieved April 19, 2007, from http://www.calpsych.org/publications/cpa/faqs.html
- American Psychological Association, Division 12, http://www.apa.org/divisions/div12/aboutcp.html
- Golightley, M. (2004) Social Work and Mental Health. Learning Matters, UK.
- Goldstrom ID, Campbell J, Rogers JA, et al. (January 2006). "National estimates for mental health mutual support groups, self-help organizations, and consumer-operated services". Adm Policy Ment Health 33 (1): 92–103. doi:10.1007/s10488-005-0019-x. PMID 16240075.
- The Joseph Rowntree Foundation (1998) The experiences of mental health service users as mental health professionals
- Chamberlin J (2005). "User/consumer involvement in mental health service delivery". Epidemiol Psichiatr Soc 14 (1): 10–4. doi:10.1017/S1121189X00001871. PMID 15792289.
- Terence V. McCann, John Baird, Eileen Clark, Sai Lu (2006). "Beliefs about using consumer consultants in inpatient psychiatric units". International Journal of Mental Health Nursing 15 (4): 258–265. doi:10.1111/j.1447-0349.2006.00432.x. PMID 17064322.
- Mind Disorders Encyclopedia Psychosurgery [Retrieved on August 5th 2008]
- Mashour GA, Walker EE, Martuza RL (June 2005). "Psychosurgery: past, present, and future" (PDF). Brain Res. Brain Res. Rev. 48 (3): 409–19. doi:10.1016/j.brainresrev.2004.09.002. PMID 15914249.
- Lakhan SE, Vieira KF (2008). "Nutritional therapies for mental disorders". Nutr J 7 (1): 2. doi:10.1186/1475-2891-7-2. PMC 2248201. PMID 18208598.
- Kessler RC, Berglund P, Demler O, Jin R, Merikangas KR, Walters EE (June 2005). "Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication". Arch. Gen. Psychiatry 62 (6): 593–602. doi:10.1001/archpsyc.62.6.593. PMID 15939837.
- "The World Mental Health Survey Initiative".
- Demyttenaere K, Bruffaerts R, Posada-Villa J, et al. (June 2004). "Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys". JAMA 291 (21): 2581–90. doi:10.1001/jama.291.21.2581. PMID 15173149.
- Somers JM, Goldner EM, Waraich P, Hsu L (February 2006). "Prevalence and incidence studies of anxiety disorders: a systematic review of the literature". Can J Psychiatry 51 (2): 100–13. PMID 16989109.
- Waraich P, Goldner EM, Somers JM, Hsu L (February 2004). "Prevalence and incidence studies of mood disorders: a systematic review of the literature". Can J Psychiatry 49 (2): 124–38. PMID 15065747.
- Kessler RC, Berglund P, Demler O, Jin R, Merikangas KR, Walters EE (June 2005). "Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication". Arch. Gen. Psychiatry 62 (6): 593–602. doi:10.1001/archpsyc.62.6.593. PMID 15939837.
- Kessler RC, Chiu WT, Demler O, Merikangas KR, Walters EE (June 2005). "Prevalence, Severity, and Comorbidity of Twelve-month DSM-IV Disorders in the National Comorbidity Survey Replication (NCS-R)". Arch. Gen. Psychiatry 62 (6): 617–27. doi:10.1001/archpsyc.62.6.617. PMC 2847357. PMID 15939839.
- US National Institute of Mental Health (2006) The Numbers Count: Mental Disorders in America Retrieved May 2007
- Alonso J, Angermeyer MC, Bernert S, et al. (2004). "Prevalence of mental disorders in Europe: results from the European Study of the Epidemiology of Mental Disorders (ESEMeD) project". Acta Psychiatr Scand Suppl 109 (420): 21–7. doi:10.1111/j.1600-0047.2004.00327.x. PMID 15128384.
- Wittchen HU, Jacobi F (August 2005). "Size and burden of mental disorders in Europe—a critical review and appraisal of 27 studies". Eur Neuropsychopharmacol 15 (4): 357–76. doi:10.1016/j.euroneuro.2005.04.012. PMID 15961293.
- Saha S, Chant D, Welham J, McGrath J (May 2005). "A Systematic Review of the Prevalence of Schizophrenia". PLoS Med. 2 (5): e141. doi:10.1371/journal.pmed.0020141. PMC 1140952. PMID 15916472.
- Torgersen S, Kringlen E, Cramer V (June 2001). "The prevalence of personality disorders in a community sample". Arch. Gen. Psychiatry 58 (6): 590–6. doi:10.1001/archpsyc.58.6.590. PMID 11386989.
- Grant BF, Hasin DS, Stinson FS, et al. (July 2004). "Prevalence, correlates, and disability of personality disorders in the United States: results from the national epidemiologic survey on alcohol and related conditions". J Clin Psychiatry 65 (7): 948–58. doi:10.4088/JCP.v65n0711. PMID 15291684.
- Carter AS, Briggs-Gowan MJ, Davis NO (January 2004). "Assessment of young children's social-emotional development and psychopathology: recent advances and recommendations for practice". J Child Psychol Psychiatry 45 (1): 109–34. doi:10.1046/j.0021-9630.2003.00316.x. PMID 14959805.
- World Health Organization Gender disparities and mental health: The Facts Last retrieved January 12, 2012
- Heinimaa M (October 2002). "Incomprehensibility: the role of the concept in DSM-IV definition of schizophrenic delusions". Med Health Care Philos 5 (3): 291–5. doi:10.1023/A:1021164602485. PMID 12517037.
- Pierre JM (May 2001). "Faith or delusion? At the crossroads of religion and psychosis". J Psychiatr Pract 7 (3): 163–72. doi:10.1097/00131746-200105000-00004. PMID 15990520.
- Johnson CV, Friedman HL (2008). "Enlightened or Delusional? Differentiating Religious, Spiritual, and Transpersonal Experiences from Psychopathology". Journal of Humanistic Psychology 48 (4): 505–27. doi:10.1177/0022167808314174.
- Everett B (1994). "Something is happening: the contemporary consumer and psychiatric survivor movement in historical context". Journal of Mind and Behavior 15 (1–2): 55–7.
- Rissmiller DJ, Rissmiller JH (June 2006). "Evolution of the antipsychiatry movement into mental health consumerism". Psychiatr Serv 57 (6): 863–6. doi:10.1176/appi.ps.57.6.863. PMID 16754765.
- Oaks D (August 2006). "The evolution of the consumer movement". Psychiatr Serv 57 (8): 1212; author reply 1216. doi:10.1176/appi.ps.57.8.1212. PMID 16870979.
- The Antipsychiatry Coalition. (2005, November 26). The Antipsychiatry Coalition. Retrieved April 19, 2007, from antipsychiatry.org
- O'Brien AP, Woods M, Palmer C (March 2001). "The emancipation of nursing practice: applying anti-psychiatry to the therapeutic community". Aust N Z J Ment Health Nurs 10 (1): 3–9. doi:10.1046/j.1440-0979.2001.00183.x. PMID 11421968.
- Weitz D (2003). "Call me antipsychiatry activist—not "consumer"". Ethical Hum Sci Serv 5 (1): 71–2. PMID 15279009.
- Patel V., Prince M. (2010). "Global mental health - a new global health field comes of age". JAMA 303 (19): 1976–1977. doi:10.1001/jama.2010.616. PMID 20483977.
- Widiger TA, Sankis LM (2000). "Adult psychopathology: issues and controversies". Annu Rev Psychol 51: 377–404. doi:10.1146/annurev.psych.51.1.377. PMID 10751976.
- Shankar Vedantam, "Psychiatry's Missing Diagnosis: Patients' Diversity Is Often Discounted". Washington Post: Mind and Culture, June 26.
- Kleinman A (1997). "Triumph or pyrrhic victory? The inclusion of culture in DSM-IV". Harv Rev Psychiatry 4 (6): 343–4. doi:10.3109/10673229709030563. PMID 9385013.
- Bhugra, D. & Munro, A. (1997) Troublesome Disguises: Underdiagnosed Psychiatric Syndromes Blackwell Science Ltd
- Clark LA (2006). "The role of moral judgment in personality disorder diagnosis". J Pers Disord. 20 (2): 184–5. doi:10.1521/pedi.2006.20.2.184.
- Karasz A (April 2005). "Cultural differences in conceptual models of depression". Social Science & Medicine 60 (7): 1625–35. doi:10.1016/j.socscimed.2004.08.011. PMID 15652693.
- Tilbury, F.; Rapley, M. (2004). "'There are orphans in Africa still looking for my hands': African women refugees and the sources of emotional distress". Health Sociology Review 13 (1): 54–64. doi:10.5555/hesr.2004.13.1.54.
- Bracken P, Thomas P (March 2001). "Postpsychiatry: a new direction for mental health". BMJ 322 (7288): 724–7. doi:10.1136/bmj.322.7288.724. PMC 1119907. PMID 11264215.
- Lewis B (2000). "Psychiatry and Postmodern Theory". J Med Humanit 21 (2): 71–84. doi:10.1023/A:1009018429802.
- Kwate NO (2005). "The heresy of African-centered psychology". J Med Humanit 26 (4): 215–35. doi:10.1007/s10912-005-7698-x. PMID 16333686.
- "Commentary on institutional racism in psychiatry, 2007" (PDF). Retrieved 2013-04-23.
- World Health Organization (2005) WHO Resource Book on Mental Health: Human rights and legislation ISBN 924156282 (PDF)
- Sklar R (June 2007). "Starson v. Swayze: the Supreme Court speaks out (not all that clearly) on the question of "capacity"". Can J Psychiatry 52 (6): 390–6. PMID 17696026.
- Manitoba Family Services and Housing. The Vulnerable Persons Living with a Mental Disability Act, 1996
- ENABLE website UN section on disability
- "Mental Health: A Report of the Surgeon General - Chapter 8". Surgeongeneral.gov. Retrieved 2013-04-23.
- Stuart H (September 2006). "Mental illness and employment discrimination". Curr Opin Psychiatry 19 (5): 522–6. doi:10.1097/01.yco.0000238482.27270.5d. PMID 16874128.
- Lucas, Clay. "Stigma hurts job prospects". Sydney Morning Herald. Retrieved 13 October 2012.
- "Stop Stigma". Bipolarworld-net.canadawebhosting.com. 2002-04-29. Retrieved 2013-04-23.
- Read J, Haslam N, Sayce L, Davies E (November 2006). "Prejudice and schizophrenia: a review of the 'mental illness is an illness like any other' approach". Acta Psychiatr Scand 114 (5): 303–18. doi:10.1111/j.1600-0447.2006.00824.x. PMID 17022790.
- Study Finds Serious Mental Illness Often Dismissed by Local Church Newswise, Retrieved on October 15, 2008.
- "Psychiatric diagnoses are less reliable than star signs". Times Online, June 2009.
- Coverdale J, Nairn R, Claasen D (2002). "Depictions of mental illness in print media: a prospective national sample". Australian and New Zealand Journal of Psychiatry 36 (5): 697–700. doi:10.1046/j.1440-1614.2002.00998.x. PMID 12225457.
- Edney, RD. (2004) Mass Media and Mental Illness: A Literature Review Canadian Mental Health Association
- Diefenbach DL (1997). "The portrayal of mental illness on prime-time television". Journal of Community Psychology 25 (3): 289–302. doi:10.1002/(SICI)1520-6629(199705)25:3<289::AID-JCOP5>3.0.CO;2-R.
- Sieff, E (2003). "Media frames of mental illnesses: The potential impact of negative frames". Journal of Mental Health 12 (3): 259–69. doi:10.1080/0963823031000118249.
- Wahl, O.F. (2003). "News Media Portrayal of Mental Illness: Implications for Public Policy". American Behavioral Scientist 46 (12): 1594–600. doi:10.1177/0002764203254615.
- The Carter Center (2008-07-18). "The Carter Center Awards 2008-2009 Rosalynn Carter Fellowships for Mental Health Journalism". Retrieved 2008-07-21.
- The Carter Center. "The Rosalynn Carter Fellowships For Mental Health JournalisM". Retrieved 2008-07-21.
- The Carter Center. "Rosalynn Carter's Advocacy in Mental Health". Retrieved 2008-07-21.
- Link BG, Phelan JC, Bresnahan M, Stueve A, Pescosolido BA (September 1999). "Public conceptions of mental illness: labels, causes, dangerousness, and social distance". Am J Public Health 89 (9): 1328–33. doi:10.2105/AJPH.89.9.1328. PMC 1508784. PMID 10474548.
- Pescosolido BA, Monahan J, Link BG, Stueve A, Kikuzawa S (September 1999). "The public's view of the competence, dangerousness, and need for legal coercion of persons with mental health problems". Am J Public Health 89 (9): 1339–45. doi:10.2105/AJPH.89.9.1339. PMC 1508769. PMID 10474550.
- Elbogen EB, Johnson SC (February 2009). "The intricate link between violence and mental disorder: results from the National Epidemiologic Survey on Alcohol and Related Conditions". Arch. Gen. Psychiatry 66 (2): 152–61. doi:10.1001/archgenpsychiatry.2008.537. PMID 19188537.
- Stuart H (June 2003). "Violence and mental illness: an overview". World Psychiatry 2 (2): 121–124. PMC 1525086. PMID 16946914.
- Brekke JS, Prindle C, Bae SW, Long JD (October 2001). "Risks for individuals with schizophrenia who are living in the community". Psychiatr Serv 52 (10): 1358–66. doi:10.1176/appi.ps.52.10.1358. PMID 11585953.
- Teplin LA, McClelland GM, Abram KM, Weiner DA (2005). "Crime Victimization in Adults With Severe Mental Illness: Comparison With the National Crime Victimization Survey". Arch Gen Psychiatry 62 (8): 911–921.
- Petersilia JR (2001). "Crime Victims With Developmental Disabilities: A Review Essay". Criminal Justice and Behavior 28 (6): 655–694.
- Steadman HJ, Mulvey EP, Monahan J, Robbins PC, Appelbaum PS, Grisso T, Roth LH, Silver E. (1998) Violence by people discharged from acute psychiatric inpatient facilities and by others in the same neighborhoods. Archives of General Psychiatry. May;55(5):393-401.
- Fazel S, Gulati G, Linsell L, Geddes JR, Grann M (August 2009). "Schizophrenia and Violence: Systematic Review and Meta-Analysis". In McGrath, John. PLoS Med. 6 (8): e1000120. doi:10.1371/journal.pmed.1000120. PMC 2718581. PMID 19668362.
- Taylor PJ, Gunn J (1999). "Homicides by people with mental illness: myth and reality". British Journal of Psychiatry 174: 9–14.
- Solomon, PL., Cavanaugh, MM., Gelles, RJ. (2005). "Family Violence among Adults with Severe Mental Illness". Trauma, Violence, & Abuse 6 (1): 40–54. doi:10.1177/1524838004272464. PMID 15574672.
- Chou KR, Lu RB, Chang M (December 2001). "Assaultive behavior by psychiatric in-patients and its related factors". J Nurs Res 9 (5): 139–51. doi:10.1097/01.JNR.0000347572.60800.00. PMID 11779087.
- Lögdberg B, Nilsson L-L, Levander MT, Levander S (2004). "Schizophrenia, neighborhood, and crime". Acta Psychiatrica Scandinavica 110 (2): 92. doi:10.1111/j.1600-0047.2004.00322.x.
- Brüne M, Brüne-Cohrs U, McGrew WC, Preuschoft S (2006). "Psychopathology in great apes: concepts, treatment options and possible homologies to human psychiatric disorders". Neurosci Biobehav Rev 30 (8): 1246–59. doi:10.1016/j.neubiorev.2006.09.002. PMID 17141312.
- Ferdowsian, HR; Durham, DL, Kimwele, C, Kranendonk, G, Otali, E, Akugizibwe, T, Mulcahy, JB, Ajarova, L, Johnson, CM (2011). "Signs of mood and anxiety disorders in chimpanzees". In Callaerts, Patrick. PLoS ONE 6 (6): e19855. doi:10.1371/journal.pone.0019855. PMC 3116818. PMID 21698223.
- Fabrega H (2006). "Making sense of behavioral irregularities of great apes". Neurosci Biobehav Rev 30 (8): 1260–73; discussion 1274–7. doi:10.1016/j.neubiorev.2006.09.004. PMID 17079015.
- Lilienfeld SO, Gershon J, Duke M, Marino L, de Waal FB (December 1999). "A preliminary investigation of the construct of psychopathic personality (psychopathy) in chimpanzees (Pan troglodytes)". J Comp Psychol 113 (4): 365–75. doi:10.1037/0735-7036.113.4.365. PMID 10608560.
- Moran M (June 20, 2003). "Animals Can Model Psychiatric Symptoms". Psychiatric News 38 (12): 20.
- Sánchez MM, Ladd CO, Plotsky PM (2001). "Early adverse experience as a developmental risk factor for later psychopathology: evidence from rodent and primate models". Dev. Psychopathol. 13 (3): 419–49. doi:10.1017/S0954579401003029. PMID 11523842.
- Matthews K, Christmas D, Swan J, Sorrell E (2005). "Animal models of depression: navigating through the clinical fog". Neurosci Biobehav Rev 29 (4–5): 503–13. doi:10.1016/j.neubiorev.2005.03.005. PMID 15925695.
- Atkinson, J. (2006) Private and Public Protection: Civil Mental Health Legislation, Edinburgh, Dunedin Academic Press ISBN 1-903765-61-7
- Hockenbury, Don and Sandy (2004). Discovering Psychology. Worth Publishers. ISBN 0-7167-5704-4.
- Fried, Yehuda and Joseph Agassi, (1976). Paranoia: A Study in Diagnosis. Boston Studies in the Philosophy of Science, 50. ISBN 90-277-0704-9.
- Fried, Yehuda and Joseph Agassi, (1983). Psychiatry as Medicine. The HAgue, Nijhoff. ISBN 90-247-2837-1.
- Porter, Roy (2002). Madness: a brief history. Oxford [Oxfordshire]: Oxford University Press. ISBN 0-19-280266-6.
- Weller M.P.I. and Eysenck M. The Scientific Basis of Psychiatry, W.B. Saunders, London, Philadelphia, Toronto etc. 1992
- Wiencke, Markus (2006) Schizophrenie als Ergebnis von Wechselwirkungen: Georg Simmels Individualitätskonzept in der Klinischen Psychologie. In David Kim (ed.), Georg Simmel in Translation: Interdisciplinary Border-Crossings in Culture and Modernity (pp. 123–155). Cambridge Scholars Press, Cambridge, ISBN 1-84718-060-5
|Wikimedia Commons has media related to: Mental and behavioural diseases and disorders|
- NIMH.NIH.gov - National Institute of Mental Health
- International Committee of Women Leaders on Mental Health
- Psychology Dictionary
- Mental Illness Watch
- Metapsychology Online Reviews: Mental Health
- The New York Times: Mental Health & Disorders
- The Guardian: Mental Health
- Mental Illness (Stanford Encyclopedia of Philosophy)
- "Insane, Statistics of". Encyclopedia Americana. 1920. | 3 | 27 |
<urn:uuid:7fb01995-41cf-4262-bfed-aa6cf6b457b6> | ||This article needs additional citations for verification. (September 2012)|
A time zone is a region on Earth that has a uniform standard time for legal, commercial, and social purposes. It is convenient for areas in close commercial or other communication to keep the same time, so time zones tend to follow the boundaries of countries and their subdivisions.
Most of the time zones on land are offset from Coordinated Universal Time (UTC) by a whole number of hours (UTC−12 to UTC+14), but a few are offset by 30 or 45 minutes. Some higher latitude countries use daylight saving time for part of the year, typically by changing clocks by an hour. Many land time zones are skewed toward the west of the corresponding nautical time zones. This also creates a permanent daylight saving time effect.
Early timekeeping
Before the invention of clocks, people marked the time of day with apparent solar time (also called "true" solar time) – for example, the time on a sundial – which was typically different for every settlement.
When well-regulated mechanical clocks became widespread in the early 19th century, each city began to use some local mean solar time. Apparent and mean solar time can differ by up to around 15 minutes (as described by the equation of time) due to the non-circular shape of the Earth's orbit around the sun and the tilt of the Earth's axis. Mean solar time has days of equal length, and the difference between the two averages to zero after a year.
Greenwich Mean Time (GMT) was established in 1675 when the Royal Observatory was built as an aid to mariners to determine longitude at sea, providing a standard reference time when each city in England kept a different local time.
Railroad time
Local solar time became increasingly awkward as railways and telecommunications improved, because clocks differed between places by an amount corresponding to the difference in their geographical longitude, which varied by four minutes of time for every degree of longitude. For example, the difference between New York and Boston is about two degrees, or 8 minutes; the difference between Sydney and Melbourne, Australia, is about 7 degrees, or 28 minutes. Bristol is 2°35′ west of Greenwich (East London), so when it is noon in Bristol, it is about 10 minutes past noon in London. The use of time zones smooths out these differences.
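Since 360 degrees of longitude correspond to 24 hours, local mean time shifts by four minutes per degree. A minimal Python sketch of this arithmetic (function name illustrative):

```python
# 360 degrees of longitude = 24 hours, so 1 degree = 4 minutes of time.
def solar_time_diff_minutes(lon_east_a, lon_east_b):
    """Minutes by which local mean time at longitude A is ahead of longitude B.

    Longitudes are in degrees, positive east of Greenwich.
    """
    return (lon_east_a - lon_east_b) * 4

# Bristol lies about 2.58 degrees west of Greenwich:
print(solar_time_diff_minutes(0.0, -2.58))  # about 10.3 minutes
```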
The first adoption of a standard time in the world was established on December 1, 1847, in Great Britain by railway companies using GMT kept by portable chronometers. The first of these companies to adopt standard time was the Great Western Railway (GWR) in November 1840. This quickly became known as Railway Time. About August 23, 1852, time signals were first transmitted by telegraph from the Royal Observatory, Greenwich. Even though 98% of Great Britain's public clocks were using GMT by 1855, it was not made Britain's legal time until August 2, 1880. Some old British clocks from this period have two minute hands, one for the local time and one for GMT.
The increase in worldwide communication had further increased the need for interacting parties to communicate mutually comprehensible time references to one another. The problem of differing local times could be solved across larger areas by synchronizing clocks worldwide, but in many places the local time would then differ markedly from the solar time to which people were accustomed. Time zones were a compromise, relaxing the complex geographic dependence while still allowing local time to approximate the mean solar time.
On November 2, 1868, the then-British colony of New Zealand officially adopted a standard time to be observed throughout the colony, and was perhaps the first country to do so. It was based on the longitude 172°30′ East of Greenwich, that is 11 hours 30 minutes ahead of GMT. This standard was known as New Zealand Mean Time.
Timekeeping on the American railroads in the mid-19th century was somewhat confused. Each railroad used its own standard time, usually based on the local time of its headquarters or most important terminus, and the railroad's train schedules were published using its own time. Some major railroad junctions served by several different railroads had a separate clock for each railroad, each showing a different time; the main station in Pittsburgh, Pennsylvania, for example, kept six different times.
Charles F. Dowd proposed a system of one-hour standard time zones for American railroads about 1863, although he published nothing on the matter at that time and did not consult railroad officials until 1869. In 1870, he proposed four ideal time zones (having north–south borders), the first centered on Washington, D.C., but by 1872 the first was centered 75°W of Greenwich, with geographic borders (for example, sections of the Appalachian Mountains). Dowd's system was never accepted by American railroads. Instead, U.S. and Canadian railroads implemented a version proposed by William F. Allen, the editor of the Traveler's Official Railway Guide. The borders of its time zones ran through railroad stations, often in major cities. For example, the border between its Eastern and Central time zones ran through Detroit, Buffalo, Pittsburgh, Atlanta, and Charleston. It was inaugurated on Sunday, November 18, 1883, also called "The Day of Two Noons", when each railroad station clock was reset as standard-time noon was reached within each time zone. The zones were named Intercolonial, Eastern, Central, Mountain, and Pacific. Within one year, 85% of all cities with populations over 10,000, about 200 cities, were using standard time. A notable exception was Detroit (which is about half-way between the meridians of eastern time and central time), which kept local time until 1900, then tried Central Standard Time, local mean time, and Eastern Standard Time before a May 1915 ordinance settled on EST and was ratified by popular vote in August 1916. The confusion of times came to an end when Standard zone time was formally adopted by the U.S. Congress in the Standard Time Act of March 19, 1918.
Worldwide time zones
Although the first person to propose a worldwide system of time zones was Italian mathematician Quirico Filopanti in his book Miranda! published in 1858, his idea was unknown outside the pages of his book until long after his death, so it did not influence the adoption of time zones during the 19th century. He proposed 24 hourly time zones, which he called "longitudinal days", the first centered on the meridian of Rome. He also proposed a universal time to be used in astronomy and telegraphy.
Scottish-born Canadian Sir Sandford Fleming proposed a worldwide system of time zones in 1879. He advocated his system at several international conferences, thus is widely credited with their invention. In 1876, his first proposal was for a global 24-hour clock, conceptually located at the center of the Earth and not linked to any surface meridian. In 1879 he specified that his universal day would begin at the anti-meridian of Greenwich (180th meridian), while conceding that hourly time zones might have some limited local use. He also proposed his system at the International Meridian Conference in October 1884, but it did not adopt his time zones because they were not within its purview. The conference did adopt a universal day of 24 hours beginning at Greenwich midnight, but specified that it "shall not interfere with the use of local or standard time where desirable".
By about 1900, almost all time on Earth was in the form of standard time zones, only some of which used an hourly offset from GMT. Many applied the time at a local astronomical observatory to an entire country, without any reference to GMT. It took many decades before all time on Earth was in the form of time zones referred to some "standard offset" from GMT/UTC. Most major countries had adopted hourly time zones by 1929. Nepal was the last country to adopt a standard offset, shifting slightly to UTC+5:45 in 1986.
Today, all nations use standard time zones for secular purposes, but they do not all apply the concept as originally conceived. Newfoundland, India, Iran, Afghanistan, Venezuela, Burma, the Marquesas, as well as parts of Australia use half-hour deviations from standard time, and some nations, such as Nepal, and some provinces, such as the Chatham Islands, use quarter-hour deviations. Some countries, most notably China and India, use a single time zone, even though the extent of their territory far exceeds 15° of longitude. Before 1949, China used five time zones (see Time in China).
Before 1972, all time zones were specified as an offset from Greenwich Mean Time (GMT), which was the mean solar time at the meridian passing through the Royal Observatory in Greenwich, London. Since 1972 all official time services have broadcast radio time signals synchronized to UTC, a form of atomic time that includes leap seconds to keep it within 0.9 seconds of this former GMT, now called UT1. Many countries now legally define their standard time relative to UTC, although some still legally refer to GMT, including the United Kingdom itself. UTC, also called Zulu time, is used everywhere on Earth by astronomers and others who need to state the time of an event unambiguously.
Time zones are based on Greenwich Mean Time (GMT), the mean solar time at longitude 0° (the Prime Meridian). The definition of GMT was recently changed – it was previously the same as UT1, a mean solar time calculated directly from the rotation of the Earth. As the rate of rotation of the Earth is not constant, the time derived from atomic clocks was adjusted to closely match UT1. In January 1972, however, the length of the second in both Greenwich Mean Time and atomic time was equalized. The readings of participating atomic clocks are averaged out to give a uniform time scale.
Because the length of the average day is a small fraction of a second more than 24 hours (slightly more than 86400 seconds), leap seconds are periodically inserted into Greenwich Mean Time to make it approximate to UT1. This new time system is also called Coordinated Universal Time (UTC). Leap seconds are inserted to keep UTC within 0.9 seconds of UT1. Because the Earth's rotation is gradually slowing, leap seconds will need to be added more frequently in the future. However, from one year to the next the rotation rate is slightly irregular, so leap seconds are not added unless observations of Earth's rotation show that one is required. In this way, local times will continue to stay close to mean solar time and the effects of variations in Earth's rotation rate will be confined to simple step changes relative to the uniform time scale (International Atomic Time or TAI). All local times differ from TAI by an integral number of seconds. With the implementation of UTC, nations began to use it in the definition of their time zones. As of 2005, most nations had altered the definition of local time in this way.
In the UK, this involved redefining Greenwich Mean Time to make it the same as UTC. British Summer Time (BST) is still one hour in advance of Greenwich Mean Time and is therefore also one hour in advance of Coordinated Universal Time. Thus Greenwich Mean Time is the local time at the Royal Observatory, Greenwich between 0100 hours GMT on the last Sunday in October and 0100 hours GMT on the last Sunday in March. Similar circumstances apply in many other places.
Leap seconds are considered by many to be a nuisance, and ways to abolish them are being considered; abolition would mean letting the difference between UTC and UT1 accumulate. One suggestion is to insert a "leap hour" in about 5,000 years. For more on this discussion read Proposal to abolish leap seconds.
Notation of time
ISO 8601
If the time is in Coordinated Universal Time (UTC), add a "Z" directly after the time without a space. "Z" is the zone designator for the zero UTC offset. "09:30 UTC" is therefore represented as "09:30Z" or "0930Z". "14:45:15 UTC" would be "14:45:15Z" or "144515Z".
UTC time is also known as "Zulu" time, since "Zulu" is the ICAO spelling alphabet code word for "Z".
Offsets from UTC
Offsets from UTC are written in the format ±[hh]:[mm], ±[hh][mm], or ±[hh]. So if the time being described is one hour ahead of UTC (such as the time in Berlin during the winter), the zone designator would be "+01:00", "+0100", or simply "+01". This is appended to the time in the same way that "Z" was above. The offset from UTC changes with daylight saving time: for example, a time offset in Chicago, which is in the North American Central Time Zone, would be "−06:00" in the winter (Central Standard Time) and "−05:00" in the summer (Central Daylight Time).
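Both notations can be produced with Python's standard datetime module (a brief illustration, not part of the ISO 8601 standard itself):

```python
from datetime import datetime, timedelta, timezone

utc_dt = datetime(2012, 9, 1, 9, 30, tzinfo=timezone.utc)

# Zero offset: append the "Z" zone designator by hand
print(utc_dt.strftime("%H:%M") + "Z")                 # 09:30Z

# One hour ahead of UTC (Berlin in winter): "+0100" via the %z directive
berlin = timezone(timedelta(hours=1))
print(utc_dt.astimezone(berlin).strftime("%H:%M%z"))  # 10:30+0100
```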
Time zones are often represented by abbreviations such as "EST, WST, CST" but these are not part of the international time and date standard ISO 8601 and their use as sole designator for a time zone is not recommended. Such designations can be ambiguous. For example, "BST", which is British Summer Time, was renamed "British Standard Time" between 1968 and 1971 when Central European Time was in force because legislators objected to calling it Central European Time. The same legislation affirmed that the Standard Time within the United Kingdom was, and would continue to be, Greenwich Mean Time.
UTC offsets worldwide
List of UTC offsets
Where the adjustment for time zones results in a time on the other side of midnight from UTC, the date at the location is one day later or earlier. For example, here is the local time at various locations when UTC is 02:00 on Tuesday and daylight saving time is not in effect:
- Honolulu, Hawaii, United States: UTC−10; 16:00 on Monday
- Toronto, Ontario, Canada: UTC−05; 21:00 on Monday
The time-zone adjustment for a specific location may vary because of daylight saving time. For example New Zealand, which is usually UTC+12, observes a one-hour daylight saving time adjustment during the Southern Hemisphere summer, resulting in a local time of UTC+13.
Time zone conversions
Conversion between time zones obeys the relationship
- "time in zone A" − "UTC offset for zone A" = "time in zone B" − "UTC offset for zone B",
in which each side of the equation is equivalent to UTC. (The more familiar term "UTC offset" is used here rather than the term "zone designator" used by the standard.)
The conversion equation can be rearranged to
- "time in zone B" = "time in zone A" − "UTC offset for zone A" + "UTC offset for zone B".
For example, what time is it in Los Angeles (UTC offset= −08) when the New York Stock Exchange opens at 09:30 (−05)?
- time in Los Angeles = 09:30 − (−05:00) + (−08:00) = 06:30.
In Delhi (UTC offset= +5:30), the New York Stock Exchange opens at
- time in Delhi = 09:30 − (−05:00) + (+5:30) = 20:00.
These calculations become more complicated near a daylight saving boundary (because the UTC offset for zone X is a function of the UTC time).
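The conversion rule can be written as a small function; both worked examples above are reproduced (a sketch that handles times as minutes since midnight, ignoring daylight saving boundaries):

```python
def convert(time_a, offset_a, offset_b):
    """time in zone B = time in zone A - UTC offset for A + UTC offset for B.

    All arguments are minutes since midnight; the result wraps modulo 24 h.
    """
    return (time_a - offset_a + offset_b) % (24 * 60)

def hm(hours, minutes=0):
    return hours * 60 + minutes

# NYSE opens at 09:30 in New York (UTC-05:00); the time in Los Angeles (UTC-08:00):
print(divmod(convert(hm(9, 30), hm(-5), hm(-8)), 60))     # (6, 30)

# ...and in Delhi (UTC+05:30):
print(divmod(convert(hm(9, 30), hm(-5), hm(5, 30)), 60))  # (20, 0)
```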
Nautical time zones
Since the 1920s a nautical standard time system has been in operation for ships on the high seas. Nautical time zones are an ideal form of the terrestrial time zone system. Under the system, a time change of one hour is required for each change of longitude by 15°. The 15° gore that is offset from GMT or UT1 (not UTC) by twelve hours is bisected by the nautical date line into two 7.5° gores that differ from GMT by ±12 hours. A nautical date line is implied but not explicitly drawn on time zone maps. It follows the 180th meridian except where it is interrupted by territorial waters adjacent to land, forming gaps: it is a pole-to-pole dashed line.
A ship within the territorial waters of any nation would use that nation's standard time, but would revert to nautical standard time upon leaving its territorial waters. The captain is permitted to change the ship's clocks at a time of the captain's choice following the ship's entry into another time zone, and often chooses midnight. Ships in shuttle traffic across a time zone border often keep the same time zone all the time, to avoid confusion about work, meal and shop opening hours; even so, the timetable for port calls must follow the land time zone.
Skewing of zones
Ideal time zones, such as nautical time zones, are based on the mean solar time of a particular meridian located in the middle of that zone with boundaries located 7.5 degrees east and west of the meridian. In practice, zone boundaries are often drawn much farther to the west with often irregular boundaries, and some locations base their time on meridians located far to the east.
For example, even though the Prime Meridian (0°) passes through Spain and France, they use the mean solar time of 15 degrees east (Central European Time) rather than 0 degrees (Greenwich Mean Time). France previously used GMT, but was switched to CET (Central European Time) during the German occupation of the country during World War II and did not switch back after the war. Similarly, prior to World War II, the Netherlands observed "Amsterdam Time", which was twenty minutes ahead of Greenwich Mean Time. They were obliged to follow German time during the war, and kept it thereafter. In the mid 1970s the Netherlands, as with other European states, began observing daylight saving (summer) time.
There is a tendency to draw time zone boundaries far to the west of their meridians. Many of these locations also use daylight saving time. As a result, in the summer, solar noon in the Spanish town of Muxia occurs at 2:37 pm by the clock. This area of Spain never experiences sunset before 6:00 pm local time even in midwinter, despite its lying more than 40 degrees north of the equator. Near the summer solstice, Muxia has sunset times similar to those of Stockholm, which is in the same time zone and 16 degrees further north.
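The skew can be checked numerically: local mean solar noon falls at 12:00 UTC minus four minutes per degree of east longitude, shifted by the zone's UTC offset (Python sketch; the longitude used for Muxia is approximate):

```python
def solar_noon_clock(longitude_east_deg, utc_offset_hours):
    """Approximate clock time (hour, minute) of local mean solar noon."""
    noon_utc_minutes = 12 * 60 - longitude_east_deg * 4  # 4 minutes per degree
    clock_minutes = noon_utc_minutes + utc_offset_hours * 60
    return divmod(round(clock_minutes), 60)

# Muxia, Spain: roughly 9.22 degrees west, on CEST (UTC+2) in summer
print(solar_noon_clock(-9.22, 2))  # (14, 37), i.e. about 2:37 pm
```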
A more extreme example is Nome, Alaska, which is at 165°24′W longitude—just west of center of the idealized Samoa Time Zone (165°W). Nevertheless, Nome observes Alaska Time (135°W) with DST so it is slightly more than two hours ahead of the sun in winter and over three in summer. Kotzebue, Alaska, also near the same meridian but north of the Arctic Circle, has an annual event on 9 August to celebrate two sunsets in the same 24-hour day, one shortly after midnight at the start of the day, and the other shortly before midnight at the end of the day.
Daylight saving time
Many countries, and sometimes just certain regions of countries, adopt daylight saving time (also known as "Summer Time") during part of the year. This typically involves advancing clocks by an hour near the start of spring and adjusting back in autumn ("spring" forward, "fall" back). Modern DST was first proposed in 1907 and was in widespread use in 1916 as a wartime measure aimed at conserving coal. Despite controversy, many countries have used it off and on since then; details vary by location and change occasionally. Most countries around the equator do not observe daylight saving time, since the seasonal difference in sunlight is minimal.
Additional information
- France has twelve time zones including those of Metropolitan France, French Guiana and numerous islands, inhabited and uninhabited. Russia has nine time zones (and used to have 11 time zones before March 2010), eight contiguous zones plus Kaliningrad exclave on the Baltic Sea. The United States has ten time zones (nine official plus that for Wake Island and its Antarctic stations; no official time zone is specified for uninhabited Howland Island and Baker Island). Australia has nine time zones (one unofficial and three official on the mainland plus four for its territories and one more for an Antarctic station not included in other time zones). The United Kingdom has eight time zones for itself and its overseas territories. Canada has six official time zones. The Danish Realm has five time zones.
- In terms of area, China is the largest country with only one time zone (UTC+08). China also has the widest spanning time zone. Before 1949, China was separated into five time zones.
- Stations in Antarctica generally keep the time of their supply bases, thus both the Amundsen-Scott South Pole Station (U.S.) and McMurdo Station (U.S.) use New Zealand time (UTC+12 southern winter, UTC+13 southern summer).
- The 27°N latitude passes back and forth across time zones in South Asia: Pakistan +05:00, India +05:30, Nepal +05:45, India +05:30, China +08:00, Bhutan +06:00, India +05:30, Myanmar +06:30. The switching was even more pronounced in 2002, when Pakistan observed daylight saving time: from west to east, the offsets were +06:00, +05:30, +05:45, +05:30, +08:00, +06:00, +05:30 and +06:30.
- Because the earliest and latest time zones are 26 hours apart, any given calendar date exists at some point on the globe for 50 hours. For example, April 11 begins in time zone UTC+14 at 10:00 UTC April 10, and ends in time zone UTC−12 at 12:00 UTC April 12.
- There are 22 places where three or more time zones meet, for instance at the tri-country border of Finland, Norway and Russia. 28 countries touch such triple points, with China involved in the most (13 of the 22). Then come India (7), and Russia and Afghanistan (4 each).
- There are 40 time zones. This is due to fractional hour offsets and zones with offsets larger than 12 hours near the International Date Line as well as one unofficial zone in Australia. See the list of time zones.
- The largest time gap along a political border is the 3.5 hour gap along the border of China (UTC+08) and Afghanistan (UTC+04:30).
- One of the most unusual time zones is the Australian Central Western Time zone (CWST), which is a small strip of Western Australia from the border of South Australia west to 125.5°E, just before Caiguna. It is 8¾ hours ahead of UTC (UTC+08:45) and covers an area of about 35,000 km2, larger than Belgium, but has a population of about 200. Although unofficial, it is universally respected in the area—without it, the time gap in standard time at 129°E (the WA/SA border) would be 1.5 hours. See Time in Australia.
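The 50-hour lifetime of a calendar date noted above follows from the 26-hour spread of offsets; it can be confirmed with Python's fixed-offset timezone objects:

```python
from datetime import datetime, timedelta, timezone

earliest = timezone(timedelta(hours=14))  # UTC+14, the earliest zone
latest = timezone(timedelta(hours=-12))   # UTC-12, the latest zone

# April 11 begins at midnight in UTC+14 and ends at midnight (the 12th) in UTC-12
start = datetime(2013, 4, 11, tzinfo=earliest).astimezone(timezone.utc)
end = datetime(2013, 4, 12, tzinfo=latest).astimezone(timezone.utc)
print(end - start)  # 2 days, 2:00:00 (50 hours)
```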
Computer systems and the Internet
Computer operating systems include the necessary support for working with all (or almost all) possible local times based on the various time zones. Internally, operating systems typically use UTC as their basic time-keeping standard, while providing services for converting local times to and from UTC, and also the ability to automatically change local time conversions at the start and end of daylight saving time in the various time zones. (See the article on daylight saving time for more details on this aspect).
Web servers presenting web pages primarily for an audience in a single time zone or a limited range of time zones typically show times as a local time, perhaps with UTC time in brackets. More internationally oriented websites may show times in UTC only or using an arbitrary time zone. For example, the international English-language version of CNN includes GMT and Hong Kong Time, whilst the US version shows Eastern Time. US Eastern Time and Pacific Time are also used fairly commonly on many US-based English-language websites with global readership. The format is typically based on the W3C Note "datetime".
Email systems and other messaging systems (IRC chat, etc.) time-stamp messages using UTC, or else include the sender's time zone as part of the message, allowing the receiving program to display the message's date and time of sending in the recipient's local time.
Database records that include a time stamp typically use UTC, especially when the database is part of a system that spans multiple time zones. The use of local time for time-stamping records is not recommended for time zones that implement daylight saving time due to the fact that once a year there is a one hour period when local times are ambiguous.
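The ambiguity can be demonstrated with Python's zoneinfo module (Python 3.9+; requires the IANA database on the system): when clocks fall back, the same wall-clock time maps to two UTC offsets, disambiguated by the fold attribute.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/New_York")

# On 2021-11-07 clocks fell back, so 01:30 local time occurred twice
first = datetime(2021, 11, 7, 1, 30, tzinfo=tz)           # fold=0: earlier (EDT) reading
second = datetime(2021, 11, 7, 1, 30, tzinfo=tz, fold=1)  # fold=1: later (EST) reading

print(first.utcoffset() == timedelta(hours=-4))   # True
print(second.utcoffset() == timedelta(hours=-5))  # True
```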
Operating systems
Most Unix-like systems, including Linux and Mac OS X, keep system time as UTC (Coordinated Universal Time). Rather than having a single time zone set for the whole computer, timezone offsets can vary for different processes. Standard library routines are used to calculate the local time based on the current timezone, normally supplied to processes through the TZ environment variable. This allows users in multiple timezones to use the same computer, with their respective local times displayed correctly to each user. Time zone information most commonly comes from the IANA time zone database. In fact, many systems, including anything using the GNU C Library, can make use of this database.
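The per-process TZ mechanism can be seen directly (Unix-only sketch; time.tzset is not available on Windows):

```python
import os
import time

def local_at_epoch(zone):
    """Render the Unix epoch (1970-01-01 00:00 UTC) in the given zone."""
    os.environ["TZ"] = zone
    time.tzset()  # re-read TZ; available on Unix-like systems only
    return time.strftime("%Y-%m-%d %H:%M", time.localtime(0))

print(local_at_epoch("America/New_York"))  # 1969-12-31 19:00 (UTC-5 in winter)
print(local_at_epoch("Asia/Tokyo"))        # 1970-01-01 09:00 (UTC+9)
```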
Microsoft Windows
Windows-based computer systems prior to Windows 2000 used local time, but Windows 2000 and later can use UTC as the basic system time. The system registry contains time zone information that includes the offset from UTC and rules that indicate the start and end dates for daylight saving in each zone. Interaction with the user normally uses local time, and application software is able to calculate the time in various zones. Terminal Servers allow remote computers to redirect their time zone settings to the Terminal Server so that users see the correct time for their time zone in their desktop/application sessions. Terminal Services uses the server base time on the Terminal Server and the client time zone information to calculate the time in the session.
Programming languages
While most application software will use the underlying operating system for timezone information, the Java Platform, from version 1.3.1, has maintained its own timezone database. This database will need to be updated whenever timezone rules change. Sun provides an updater tool for this purpose.
As an alternative to the timezone information bundled with the Java Platform, programmers may choose to use the Joda-Time library. This library includes its own timezone data based on the IANA time zone database.
The DateTime objects and related functions have been compiled into the PHP core since 5.2. This includes the ability to get and set the default script timezone, and DateTime is aware of its own timezone internally. PHP.net provides extensive documentation on this. As noted there, the most current timezone database can be implemented via the PECL timezonedb.
In Python, the standard module datetime stores and operates on time zone information through the tzinfo class. The third-party pytz module provides access to the full IANA time zone database. The negated time zone offset in seconds is stored in the time.timezone and time.altzone attributes.
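A short illustration of tzinfo in practice (since Python 3.2 the standard library also ships datetime.timezone, a fixed-offset tzinfo subclass):

```python
from datetime import datetime, timedelta, timezone

# timezone is a concrete, fixed-offset tzinfo implementation
ist = timezone(timedelta(hours=5, minutes=30), name="IST")
dt = datetime(2012, 9, 1, 20, 0, tzinfo=ist)

print(dt.isoformat())                           # 2012-09-01T20:00:00+05:30
print(dt.astimezone(timezone.utc).isoformat())  # 2012-09-01T14:30:00+00:00
```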
Each Smalltalk dialect comes with its own built-in classes for dates, times and timestamps, only a few of which implement the DateAndTime and Duration classes as specified by the ANSI Smalltalk Standard. VisualWorks provides a TimeZone class that supports up to two annually recurring offset transitions, which are assumed to apply to all years (same behavior as Windows time zones). Squeak provides a Timezone class that does not support any offset transitions. Dolphin Smalltalk does not support time zones at all.
For full support of the tz database (zoneinfo) in a Smalltalk application (including support for any number of annually recurring offset transitions, and support for different intra-year offset transition rules in different years) the third-party, open-source, ANSI-Smalltalk-compliant Chronos Date/Time Library is available for use with any of the following Smalltalk dialects: VisualWorks, Squeak, Gemstone, or Dolphin.
The SQL standard defines two basic timestamp types:
- TIMESTAMP WITH TIME ZONE
- TIMESTAMP WITHOUT TIME ZONE
However, the standard has a somewhat naive understanding of time zones: it generally assumes a time zone can be specified by a simple offset from GMT. This causes problems when trying to do arithmetic on dates that span daylight saving time transitions or political changes in time zone rules.
Oracle Database is configured with a database time zone, and connecting clients are configured with session time zones. Oracle Database uses two data types to store time zone information:
- TIMESTAMP WITH TIME ZONE
- Stores date and time information with the offset from UTC
- TIMESTAMP WITH LOCAL TIME ZONE
- Stores date and time information with respect to the dbtimezone (which cannot be changed so long as there is a column in the db of this type), automatically adjusting the date and time from the stored time zone to the client's session time zone.
PostgreSQL provides the following date/time types:
- TIMESTAMP WITH TIME ZONE
- Stores date and time in UTC and converts to the client's local time zone (which could be different for each client) for display purposes and conversion to other types.
- TIMESTAMP WITHOUT TIME ZONE
- Stores date and time without any conversion on input or output. When converting to a TIMESTAMP WITH TIME ZONE, interprets it according to the client's local time zone.
- TIME WITH TIME ZONE
- Stores time of day together with a UTC offset in which it is to be interpreted.
- TIME WITHOUT TIME ZONE
- Stores time of day without any time zone specification.
Microsoft Outlook
Microsoft Outlook has a much-criticized behavior regarding time zone handling. Appointments stored in Outlook move when the computer changes time zone, since they are fixed against UTC rather than against the local hour. As a consequence, someone who enters an appointment that requires travel into another time zone will not see the correct time for the appointment after travelling. For example, a New Yorker plans to meet someone in Los Angeles at 9:00 AM. He enters an appointment at 9:00 AM in Outlook while his computer is on New York time; after he travels to Los Angeles and adjusts his computer's time zone, the meeting shows up at 6:00 AM (9:00 New York time) in Outlook.

One workaround is to adjust the clock but not the time zone of the computer when travelling, but this gives sent e-mail the wrong time stamp, and new meeting invitations will be wrong. Microsoft instead recommends not changing the clock at all and showing a second time scale in the calendar, but then reminder pop-ups appear at the wrong time, since the clock does not match local time. Outlook does show the correct time if the organizer invites the guest to a meeting using the "invite attendees" feature (the Los Angeles meeting shows up as 12:00 noon in the New Yorker's calendar before he adjusts the time zone), but only if the time zone is adjusted when travelling. Outlook also shows the correct time for telephone meetings.
Outlook 2010 added the ability to specify the time zone in which an event occurs, which solves most of these problems when used properly. An appointment at 9:00 AM Los Angeles time will show up at 12:00 PM New York time, but at 9:00 AM on the secondary time scale if one is displayed.
Time zones in outer space
Orbiting spacecraft typically experience many sunrises and sunsets in a 24-hour period, or in the case of Apollo program astronauts travelling to the moon, none. Thus it is not possible to calibrate time zones with respect to the sun, and still respect a 24-hour sleep/wake cycle. A common practice for space exploration is to use the Earth-based time zone of the launch site or mission control. This keeps the sleeping cycles of the crew and controllers in sync. The International Space Station normally uses Coordinated Universal Time (UTC).
Timekeeping on Mars can be more complex, since the planet has a solar day of approximately 24 hours and 39 minutes, known as a sol. Earth controllers for some Mars missions have synchronized their sleep/wake cycles with the Martian day, because solar-powered rover activity on the surface was tied to periods of light and dark. The difference in day length caused the sleep/wake cycles to slowly drift with respect to the day/night cycles on Earth, repeating approximately once every 36 days.
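The roughly 36-day repetition follows from simple arithmetic: the schedules drift by the 39-minute excess of a sol over an Earth day, and realign once a full 24 hours of drift has accumulated. Illustrated in Python:

```python
# A Martian sol is approximately 24 h 39 min; an Earth day is 24 h.
earth_day_min = 24 * 60              # 1440 minutes
sol_min = 24 * 60 + 39               # 1479 minutes
drift_per_sol = sol_min - earth_day_min   # 39 minutes of drift per sol

# The day/night cycles realign after a full Earth day of accumulated drift:
sols_to_realign = earth_day_min / drift_per_sol
print(round(sols_to_realign, 1))     # ~36.9, i.e. roughly every 36 days
```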
See also
- Daylight saving time
- ISO 8601
- Jet lag
- List of time zone abbreviations
- List of time zones by country
- Lists of time zones
- Metric time
- Time by country
- World clock
- Universal Greeting Time
Heart Problems: Living With a Pacemaker
Introduction
A pacemaker keeps your heart from beating too slowly. It's important to know how this device works and how to keep it working right. Learning a few important facts about pacemakers can help you get the best results from your device.
You may have a device that combines a pacemaker and an implantable cardioverter-defibrillator (ICD), which can shock your heart back to a normal rhythm. For more information on ICDs, see Heart Problems: Living With an ICD.
- Avoid strong magnetic and electrical fields. These can keep your device from working right.
- Most office equipment and home appliances are safe to use. Learn which things you should use with caution and which you should stay away from.
- Be sure that any doctor, dentist, or other health professional you see knows that you have a pacemaker.
- Always carry a card in your wallet that tells what kind of device you have. Wear medical alert jewelry that says you have a pacemaker.
- Have your pacemaker checked regularly to make sure it is working right.
Pacemakers are small electrical devices that help control the timing of your heartbeat.
A pacemaker is implanted under the skin of your chest wall. The pacemaker's wires are passed through a vein into the chambers of your heart. The pacemaker sends out small electrical pulses that keep your heart from beating too slowly.
A pacemaker sends out mild electrical pulses that keep your heart from beating too slowly.
To be sure that your device is working right, you will need to have it checked regularly. Pacemakers can stop working because of loose or broken wires or other problems. Your doctor also will make sure that your pacemaker settings are right for what your body needs.
You may need to go to your doctor's office, or you may be able to get the device checked over the phone or the Internet.
Pacemakers run on batteries. In most cases, pacemaker batteries last 5 to 15 years. When it's time to replace the battery, you'll need another surgery, although it will be easier than the surgery you had to place the device, because your doctor doesn't have to replace the leads that go to your heart.
It's important to have your pacemaker checked regularly to make sure it is working right.
When you have a pacemaker, it's important to avoid strong magnetic and electrical fields. The lists below show electrical and magnetic sources and how they may affect your pacemaker. For best results, follow these guidelines. These safety tips also apply to devices that combine an ICD and a pacemaker. If you have questions, check with your doctor.
Stay away from:
Use with caution:
Safe to use:
Having medical tests and procedures
Most medical tests and procedures won't affect your pacemaker, except for MRI, which uses strong magnets. To be safe:
- Let your doctors, dentists, and other health professionals know that you have a pacemaker before you have any test, procedure, or surgery.
- Have your dentist talk to your doctor before you have any dental work or surgery.
- If you need physical therapy, have the therapist contact your doctor before using ultrasound, heat therapy, or electrical stimulation.
You can travel safely with a cardiac device. But you'll want to be prepared before you go.
- Bring a list of the names and phone numbers of your doctors.
- Bring your cardiac device identification card with you.
- Know what to do when going through airport security.
Talk to your doctor about how having a heart rhythm problem may affect your ability to drive.
Letting others know
- Carry a pacemaker identification card with you at all times. The card should include manufacturer information and the model number. Your doctor can give you an ID card.
- Wear medical alert jewelry stating that you have a pacemaker. You can buy this at most drugstores or online.
Going to follow-up visits
- Go to all your appointments with your doctor to make sure that your device is working right.
- Your doctor and/or the device maker will contact you about what to do if your device is recalled.
- If you take heart rhythm medicines, take them as prescribed. The medicines work with your pacemaker to help your heart keep a steady rhythm.
Pacemakers often are used to improve your ability to exercise. Talk to your doctor about the type and amount of exercise and other activity you can do.
- You may need to limit your activity if you have an irregular heart rate caused by heart failure or another heart problem.
- Don't play contact sports, such as soccer or basketball, because the device can be damaged. Sports such as swimming, running, walking, tennis, golf, and bicycling are safer.
Stop exercising and call your doctor if you have:
- Pressure or pain in your chest, neck, arm, jaw, or shoulder.
- Dizziness, lightheadedness, or nausea.
- Unusual shortness of breath or tiredness.
- A heartbeat that feels unusual for you: too fast, too slow, or skipping a beat.
- Other symptoms that cause you concern.
Most people who have a pacemaker can have an active sex life. After you get a pacemaker implanted, you'll let your chest heal for a short time. If your doctor says that you can exercise and be active, then it's probably safe for you to have sex.
Talk with your doctor if you have any concerns.
When to call a doctor
Call your doctor right away if you have symptoms that could mean your device isn't working properly, such as:
- Your heartbeat is very fast or slow, skipping, or fluttering.
- You feel dizzy, lightheaded, or faint.
- You have shortness of breath that is new or getting worse.
Call your doctor right away if you think you have an infection near your device. Signs of an infection include:
- Changes in the skin around your device, such as swelling, warmth, redness, and pain.
- An unexplained fever.
It's safe to use a cell phone, but don't keep it in a pocket directly over your pacemaker.
You need to carry a pacemaker ID card with you at all times. The card should include manufacturer information and the model number.
Credits

Primary Medical Reviewer: E. Gregory Thompson, MD - Internal Medicine
Specialist Medical Reviewer: Rakesh K. Pai, MD, FACC - Cardiology, Electrophysiology
Last Revised: April 20, 2012
To learn more visit Healthwise.org
© 1995-2013 Healthwise, Incorporated. Healthwise, Healthwise for every health decision, and the Healthwise logo are trademarks of Healthwise, Incorporated.
ISO 8601 Data elements and interchange formats – Information interchange – Representation of dates and times is an international standard covering the exchange of date and time-related data. It was issued by the International Organization for Standardization (ISO) and was first published in 1988. The purpose of this standard is to provide an unambiguous and well-defined method of representing dates and times, so as to avoid misinterpretation of numeric representations of dates and times, particularly when data is transferred between countries with different conventions for writing numeric dates and times.
The standard organizes the data so the largest temporal term (the year) appears first in the data string and progresses to the smallest term (the second). It also provides for a standardized method of communicating time-based information across time zones by attaching an offset to Coordinated Universal Time (UTC).
Date and time (current at page generation) expressed according to ISO 8601:
- Combined date and time in UTC: 2013-06-19T19:26Z
- Date with week number: 2013-W25-3
The first edition of the ISO 8601 standard was published in 1988. It unified and replaced a number of older ISO standards on various aspects of date and time notation: ISO 2014, ISO 2015, ISO 2711, ISO 3307, and ISO 4031. It has been superseded by a second edition in 2000 and by the current third edition published on 3 December 2004.
ISO 2014 was the standard that originally introduced the all-numeric date notation in most-to-least-significant order [YYYY]-[MM]-[DD]. The ISO week numbering system was introduced in ISO 2015, and the identification of days by ordinal dates was originally defined in ISO 2711.
It is maintained by ISO Technical Committee TC 154.
The standard uses the Gregorian calendar, which serves as an international standard for civil use.
ISO 8601 fixes a reference calendar date to the Gregorian calendar of 20 May 1875 as the date the Convention du Mètre (Metre Convention) was signed in Paris. However, ISO calendar dates before the Convention are still compatible with the Gregorian calendar all the way back to the official introduction of the Gregorian calendar on 1582-10-15. Earlier dates, in the proleptic Gregorian calendar, may be used by mutual agreement of the partners exchanging information. The standard states that every date must be consecutive, so usage of the Julian calendar would be contrary to the standard (because at the switchover date, the dates would not be consecutive).
ISO 8601 prescribes, as a minimum, a four-digit year [YYYY] to avoid the year 2000 problem. It therefore represents years from 0000 to 9999, year 0000 being equal to 1 BCE and all others CE.
An expanded year representation [±YYYYY] must have an agreed-upon number of extra year digits beyond the four-digit minimum, and it should always be prefixed with a + or − sign instead of the common AD/CE or BC/BCE notation; by convention year zero is labelled positive: +0000. The addition of a year zero causes earlier years to be different by one when converted; for example, the year 3 BCE would be denoted by −0002.
Formats: [YYYY]-[MM]-[DD] (extended), [YYYY][MM][DD] (basic), [YYYY]-[MM] (but not YYYYMM)
Calendar date representations are in the forms shown above. [YYYY] indicates a four-digit year, 0000 through 9999. [MM] indicates a two-digit month of the year, 01 through 12. [DD] indicates a two-digit day of that month, 01 through 31. For example, "the 5th of April 1981" may be represented as either "1981-04-05" in the extended format or "19810405" in the basic format.
The standard also allows for calendar dates to be written with reduced precision. For example, one may write "1981-04" to mean "1981 April", and one may simply write "1981" to refer to that year or "19" to refer to the century from 1900 to 1999 inclusive.
Although the standard allows both the YYYY-MM-DD and YYYYMMDD formats for complete calendar date representations, if the day [DD] is omitted then only the YYYY-MM format is allowed. By disallowing dates of the form YYYYMM, the standard avoids confusion with the truncated representation YYMMDD (still often used).
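As a quick illustration using Python's standard library (the `date` type emits the extended format; `strftime` can produce the others):

```python
from datetime import date

d = date(1981, 4, 5)
print(d.isoformat())          # extended format: 1981-04-05
print(d.strftime("%Y%m%d"))   # basic format:    19810405
print(d.strftime("%Y-%m"))    # reduced precision: 1981-04 (YYYYMM is not allowed)
```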
Week date representations are in the format [YYYY]-W[ww]-[D] (extended) or [YYYY]W[ww][D] (basic). [YYYY] indicates the ISO week-numbering year which is slightly different from the calendar year (see below). [Www] is the week number prefixed by the letter W, from W01 through W53. [D] is the weekday number, from 1 through 7, beginning with Monday and ending with Sunday. This form is popular in the manufacturing industries.
There are mutually equivalent descriptions of week 01:
If 1 January is on a Monday, Tuesday, Wednesday or Thursday, it is in week 01. If 1 January is on a Friday, Saturday or Sunday, it is in week 52 or 53 of the previous year (there is no week 00). 28 December is always in the last week of its year.
The week number can be described by counting the Thursdays: week 12 contains the 12th Thursday of the year.
The ISO week-numbering year starts at the first day (Monday) of week 01 and ends at the Sunday before the new ISO year (hence without overlap or gap). It consists of 52 or 53 full weeks. The ISO week-numbering year number deviates from the number of the calendar year (Gregorian year) on a Friday, Saturday, and Sunday, or a Saturday and Sunday, or just a Sunday, at the start of the calendar year (which are at the end of the previous ISO week-numbering year) and a Monday, Tuesday and Wednesday, or a Monday and Tuesday, or just a Monday, at the end of the calendar year (which are in week 01 of the next ISO week-numbering year). For Thursdays, the ISO week-numbering year number is always equal to the calendar year number.
For an overview of week numbering systems see week number. The US system has weeks from Sunday through Saturday, and partial weeks at the beginning and the end of the year. An advantage is that no separate year numbering like the ISO week-numbering year is needed, while correspondence of lexicographical order and chronological order is preserved.
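These rules can be observed with Python's standard `date.isocalendar()`:

```python
from datetime import date

# 1 January 2010 was a Friday, so it falls in week 53 of the
# previous ISO week-numbering year (2009), per the rules above.
year, week, weekday = date(2010, 1, 1).isocalendar()
print(year, week, weekday)     # 2009 53 5

# 28 December is always in the last week of its year:
y, w, wd = date(2008, 12, 28).isocalendar()
print(y, w, wd)                # 2008 52 7
```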
An ordinal date is a simple form for occasions when the arbitrary nature of week and month definitions is more of an impediment than an aid, for instance, when comparing dates from different calendars. It is written as [YYYY]-[DDD] (extended) or [YYYY][DDD] (basic), where [YYYY] indicates a year and [DDD] is the day of that year, from 001 through 365 (366 in leap years). For example, "1981-04-05" is also "1981-095".
This format is used with simple hardware systems that have a need for a date system, but where including full calendar calculation software may be a significant nuisance. This system is sometimes incorrectly referred to as "Julian Date", whereas the astronomical Julian Date is a sequential count of the number of days since day 0, beginning 1 January 4713 BC Greenwich noon in the Julian proleptic calendar (or noon on ISO date -4713-11-24, which uses the Gregorian proleptic calendar with a year 0000).
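In Python the ordinal form maps to `strftime`'s zero-padded `%j` day-of-year field, and the reverse mapping is simple date arithmetic:

```python
from datetime import date, timedelta

d = date(1981, 4, 5)
print(d.strftime("%Y-%j"))     # 1981-095

# Recovering the calendar date from ordinal day 095 of 1981:
recovered = date(1981, 1, 1) + timedelta(days=95 - 1)
print(recovered)               # 1981-04-05
```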
ISO 8601 uses the 24-hour clock system. The basic format is [hh][mm][ss] and the extended format is [hh]:[mm]:[ss].
So a time might appear as either "134730" in the basic format or "13:47:30" in the extended format.
It is also acceptable to omit lower order time elements for reduced accuracy: [hh]:[mm], [hh][mm] and [hh] are all used. (The use of [hh] alone is considered basic format.)
Midnight is a special case and can be referred to as both "00:00" and "24:00". The notation "00:00" is used at the beginning of a calendar day and is the more frequently used. At the end of a day use "24:00". Note that "2007-04-05T24:00" is the same instant as "2007-04-06T00:00" (see Combined date and time representations below).
Decimal fractions may also be added to any of the three time elements. A decimal mark, either a comma or a dot (without any preference as stated in resolution 10 of the 22nd General Conference CGPM in 2003, but with a preference for a comma according to ISO 8601:2004) is used as a separator between the time element and its fraction. A fraction may only be added to the lowest order time element in the representation. To denote "14 hours, 30 and one half minutes", do not include a seconds figure. Represent it as "14:30,5", "1430,5", "14:30.5", or "1430.5". There is no limit on the number of decimal places for the decimal fraction. However, the number of decimal places needs to be agreed to by the communicating parties.
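Python's `datetime` rejects hour 24, so the "24:00" end-of-day form needs normalizing before parsing. A minimal illustrative helper (our own, not a standard-library API) shows the equivalence noted above:

```python
from datetime import datetime, timedelta

def parse_iso_midnight(s: str) -> datetime:
    """Parse an ISO 8601 combined value, mapping T24:00 to 00:00 of the next day."""
    if "T24:00" in s:
        day = datetime.fromisoformat(s.replace("T24:00", "T00:00"))
        return day + timedelta(days=1)
    return datetime.fromisoformat(s)

# "2007-04-05T24:00" denotes the same instant as "2007-04-06T00:00":
print(parse_iso_midnight("2007-04-05T24:00") ==
      parse_iso_midnight("2007-04-06T00:00"))   # True
```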
If no UTC relation information is given with a time representation, the time is assumed to be in local time. While it may be safe to assume local time when communicating in the same time zone, it is ambiguous when used in communicating across different time zones. It is usually preferable to indicate a time zone (zone designator) using the standard's notation.
If the time is in UTC, add a Z directly after the time without a space. Z is the zone designator for the zero UTC offset. "09:30 UTC" is therefore represented as "09:30Z" or "0930Z". "14:45:15 UTC" would be "14:45:15Z" or "144515Z".
UTC time is also known as 'Zulu' time, since 'Zulu' is the NATO phonetic alphabet word for 'Z'.
The offset from UTC is given in the format ±[hh]:[mm], ±[hh][mm], or ±[hh]. So if the time being described is one hour ahead of UTC (such as the time in Berlin during the winter), the zone designator would be "+01:00", "+0100", or simply "+01". This is appended to the time in the same way that 'Z' was above. The offset from UTC changes with daylight saving time, e.g. a time offset in Chicago, would be "-06:00" for the winter (Central Standard Time) and "-05:00" for the summer (Central Daylight Time).
The following times all refer to the same moment: "18:30Z", "22:30+04", "1130-0700", and "15:00-03:30". Nautical time zone letters are not used with the exception of Z. To calculate UTC time one has to subtract the offset from the local time, e.g. for "15:00-03:30" do 15:00 − (−03:30) to get 18:30 UTC.
An offset of zero, in addition to having the special representation "Z", can also be stated numerically as "+00:00", "+0000", or "+00". However, it is not permitted to state it numerically with a negative sign, as "-00:00", "-0000", or "-00". The clause dictating sign usage (clause 3.4.2 in the 2004 edition of the standard) states that a plus sign must be used for a positive or zero value, and a minus sign for a negative value. Contrary to this rule, RFC 3339, which is otherwise a profile of ISO 8601, permits the use of "-00", with the same denotation as "+00" but a differing connotation.
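The equivalence of those four representations can be checked with Python's standard offset handling (the times are written here in the extended ±hh:mm form, which `fromisoformat` accepts on all supported Python versions):

```python
from datetime import datetime, timezone

# The four equivalent moments from the text, normalized to UTC:
moments = ["2013-06-19T18:30+00:00", "2013-06-19T22:30+04:00",
           "2013-06-19T11:30-07:00", "2013-06-19T15:00-03:30"]
utc = [datetime.fromisoformat(s).astimezone(timezone.utc) for s in moments]
print({t.strftime("%H:%M") for t in utc})   # {'18:30'}
```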
A single point in time can be represented by concatenating a complete date expression, the letter T as a delimiter, and a valid time expression. For example "2007-04-05T14:30".
Either basic or extended formats may be used, but both date and time must use the same format. The date expression may be calendar, week, or ordinal, and must use a complete representation. The time expression may use reduced accuracy. It is permitted to omit the 'T' character by mutual agreement.
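For illustration, Python's `datetime.fromisoformat` parses the extended combined form with the T delimiter (note that before Python 3.11 it accepts only a subset of ISO 8601, essentially what `isoformat()` emits):

```python
from datetime import datetime

# The "2007-04-05T14:30" example from the text:
dt = datetime.fromisoformat("2007-04-05T14:30")
print(dt.date(), dt.time())   # 2007-04-05 14:30:00
```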
Durations are a component of time intervals and define the amount of intervening time in a time interval. They should only be used as part of a time interval as prescribed by the standard. Time intervals are discussed in the next section.
Durations are represented by the format P[n]Y[n]M[n]DT[n]H[n]M[n]S or P[n]W as shown to the right. In these representations, the [n] is replaced by the value for each of the date and time elements that follow the [n]. Leading zeros are not required, but the maximum number of digits for each element should be agreed to by the communicating parties. The capital letters P, Y, M, W, D, T, H, M, and S are designators for each of the date and time elements and are not replaced.
For example, "P3Y6M4DT12H30M5S" represents a duration of "three years, six months, four days, twelve hours, thirty minutes, and five seconds".
Date and time elements including their designator may be omitted if their value is zero, and lower order elements may also be omitted for reduced precision. For example, "P23DT23H" and "P4Y" are both acceptable duration representations.
To resolve ambiguity, "P1M" is a one-month duration and "PT1M" is a one-minute duration (note the time designator, T, that precedes the time value). The smallest value used may also have a decimal fraction, as in "P0.5Y" to indicate half a year. This decimal fraction may be specified with either a comma or a full stop, as in "P0,5Y" or "P0.5Y". The standard does not prohibit date and time values in a duration representation from exceeding their "carry over points" except as noted below. Thus, "PT36H" could be used as well as "P1DT12H" for representing the same duration.
Alternatively, a format for duration based on combined date and time representations may be used by agreement between the communicating parties either in the basic format PYYYYMMDDThhmmss or in the extended format P[YYYY]-[MM]-[DD]T[hh]:[mm]:[ss]. For example, the first duration shown above would be "P0003-06-04T12:30:05". However, individual date and time values cannot exceed their moduli (e.g. a value of 13 for the month or 25 for the hour would not be permissible).
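A minimal sketch of a parser for the P[n]Y[n]M[n]DT[n]H[n]M[n]S form (the `parse_duration` helper and its field names are our own, not a standard library API; it deliberately ignores the P[n]W week form and decimal fractions):

```python
import re

# Optional designator groups; the T separates date elements from time elements,
# which is what distinguishes P1M (one month) from PT1M (one minute).
PATTERN = re.compile(
    r"^P(?:(?P<years>\d+)Y)?(?:(?P<months>\d+)M)?(?:(?P<days>\d+)D)?"
    r"(?:T(?:(?P<hours>\d+)H)?(?:(?P<minutes>\d+)M)?(?:(?P<seconds>\d+)S)?)?$"
)

def parse_duration(s: str) -> dict:
    m = PATTERN.match(s)
    if not m:
        raise ValueError(f"not an ISO 8601 duration: {s}")
    return {k: int(v) for k, v in m.groupdict().items() if v is not None}

print(parse_duration("P3Y6M4DT12H30M5S"))
# {'years': 3, 'months': 6, 'days': 4, 'hours': 12, 'minutes': 30, 'seconds': 5}
print(parse_duration("PT1M"))   # {'minutes': 1}, distinct from P1M (one month)
```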
A time interval is the intervening time between two time points. The amount of intervening time is expressed by a duration (as described in the previous section). The two time points (start and end) are expressed by either a combined date and time representation or just a date representation.
There are four ways to express a time interval:
Of these, the first three require two values separated by an interval designator which is usually a solidus or forward slash "/". Clause 4.4.2 of the standard notes that: "In certain application areas a double hyphen is used as a separator instead of a solidus." The standard does not define the term "double hyphen", but previous versions used notations like "2000--2002". Use of a double hyphen instead of a solidus allows inclusion in computer filenames. A solidus is a reserved character and not allowed in a filename in common operating systems.
For <start>/<end> expressions, if any elements are missing from the end value, they are assumed to be the same as for the start value including the time zone. This feature of the standard allows for concise representations of time intervals. For example, the date of a two-hour meeting including the start and finish times could be simply shown as "2007-12-14T13:30/15:30", where "/15:30" implies "/2007-12-14T15:30" (the same date as the start), or the beginning and end dates of a monthly billing period as "2008-02-15/03-14", where "/03-14" implies "/2008-03-14" (the same year as the start).
If greater precision is desirable to represent the time interval, then more time elements can be added to the representation. An observation period that has a duration of approximately three days, for example, can be succinctly shown as "2007-11-13/15", i.e. from any time on 2007-11-13 to any time on 2007-11-15. If a more exact start and end of the observation period need to be shown either for clarity or for measurement and recording purposes, the same time interval representation could be expanded to "2007-11-13T00:00/15T24:00", i.e. midnight at the start (T00:00) of 2007-11-13 to midnight at the end (T24:00) of 2007-11-15, a total of 72 hours.
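The rule that omitted end elements are inherited from the start can be sketched as follows (our own helper logic, not a library API), using the two-hour meeting example above:

```python
from datetime import datetime, timedelta

# In "2007-12-14T13:30/15:30" the end value "15:30" inherits the
# missing date from the start value.
interval = "2007-12-14T13:30/15:30"
start_s, end_s = interval.split("/")
start = datetime.fromisoformat(start_s)
end = datetime.fromisoformat(f"{start.date()}T{end_s}")  # borrow start's date
print(end - start)   # 2:00:00, the two-hour meeting from the text
```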
Repeating intervals are specified in section "4.5 Recurring time interval". They are formed by adding "R[n]/" to the beginning of an interval expression, where R is used as the letter itself and [n] is replaced by the number of repetitions. Leaving out the value for [n] means an unbounded number of repetitions. So, to repeat the interval of "P1Y2M10DT2H30M" five times starting at "2008-03-01T13:00:00Z", use "R5/2008-03-01T13:00:00Z/P1Y2M10DT2H30M".
ISO 8601:2000 allowed truncation (by agreement), where leading components of a date or time are omitted. Notably, this allowed two-digit years to be used and the ambiguous formats YY-MM-DD and YYMMDD. This provision was removed in ISO 8601:2004.
On the Internet, the World Wide Web Consortium (W3C) uses ISO 8601 in defining a profile of the standard that restricts the supported date and time formats to reduce the chance of error and the complexity of software.
RFC 3339 defines a profile of ISO 8601 for use in Internet protocols and standards. It explicitly excludes durations and dates before the common era. The more complex formats such as week numbers and ordinal days are not permitted.
RFC 3339 deviates from ISO 8601 in allowing a zero timezone offset to be specified as "-00:00", which ISO 8601 forbids. RFC 3339 intends "-00:00" to carry the connotation that it is not stating a preferred timezone, whereas the conforming "+00:00" or any non-zero offset connotes that the offset being used is preferred. This convention regarding "-00:00" is derived from earlier RFCs, such as RFC 2822 which uses it for timestamps in email headers. RFC 2822 made no claim that any part of its timestamp format conforms to ISO 8601, and so was free to use this convention without conflict. RFC 3339 errs in adopting this convention while also claiming conformance to ISO 8601.
ISO 8601 is referenced by several specifications, but the full range of options of ISO 8601 is not always used. For example, the various electronic program guide standards for TV, digital radio, etc. use several forms to describe points in time and durations. The ID3 audio meta-data specification also makes use of a subset of ISO 8601. The GeneralizedTime type of ASN.1 makes use of another subset of ISO 8601.
The ISO 8601 week date, as of 2006, appeared in its basic form on major brand commercial packaging in the United States. Its appearance depended on the particular packaging, canning, or bottling plant more than any particular brand. The format is particularly useful for quality assurance, so that production errors can be readily traced to work weeks, and products can be correctly targeted for recall.
National standards based on ISO 8601 include:

Australia: AS ISO 8601-2007
Austria: OENORM EN 28601
Belgium: NBN EN 28601
Czech Republic: ČSN ISO 8601
European Norm: EN 28601:1992
France: NF Z69-200; NF EN 28601:1993-06-01
Germany: DIN ISO 8601:2006-09 (replaced DIN EN 28601:1993-02); related: DIN 5008:2011-04
Greece: ELOT EN 28601
Hungary: MSZ ISO 8601:2003
Iceland: IST EN 28601:1992
Italy: UNI EN 28601
Japan: JIS X 0301-2002
Lithuania: LST ISO 8601:1997
Netherlands: NEN ISO 8601 & NEN EN 28601 & NEN 2772
People's Republic of China: GB/T 7408-2005
Portugal: NP EN 28601
Russia: ГОСТ ИСО 8601-2001 (current), ГОСТ 7.64-90 (obsolete)
South Africa: ARP 010:1989
Spain: UNE EN 28601
Switzerland: SN ISO 8601:2005-08
Thailand: TIS 1111:2535 in 1992
UK: BS ISO 8601:2004, BS EN 28601 (1989-06-30)
Ukraine: ДСТУ ISO 8601:2010
US: ANSI INCITS 30-1997 (R2008) and NIST FIPS PUB 4-2
Media related to ISO 8601 at Wikimedia Commons
Photo by Jon Chase
Interview on Chinese Women's Lives
James Watson (left) in local teashop with two old friends, Mr. Tang Tim-sing and his father, Mr. Tang Ying-lin, in Ha Tsuen Village, Yuen Long District, Hong Kong New Territories (summer 1995). They are drinking a local favorite consisting of coffee, black tea, sugar, and canned condensed milk. James and Rubie Watson lived in Ha Tsuen for 15 months, 1977-1978. They return regularly to visit friends and learn of changes affecting village life.
A village girl with family friend at a local McDonald’s restaurant located in one of Hong Kong’s new towns (Tin Shui Wai in Yuen Long District, New Territories). This huge new city emerged from reclaimed land in the 1980s and 1990s. One of the first public facilities to appear was this McDonald’s restaurant, which immediately became an important center of community life. Prof. Watson’s 1990s research dealt with the localization of transnational fast food chains. To the child depicted in this photo (taken on June 1st, 1994) McDonald’s is not a foreign institution. The Big Mac has become local food for local people. Ronald McDonald (known in Cantonese as Uncle McDonald) is very much at home in the Hong Kong New Territories.
Dried and salted fish shops in Macau, 15 March 1997. Preserved fish like this once constituted a central feature of the diet in south China and was far more important than pork or poultry (which were too expensive for most people to consume regularly). The market in preserved fish has declined dramatically since the 1980s as families became more affluent and demanded fresh fish, pork, and chicken. Shops like this could once be found on every street corner in Macau, Hong Kong, and Guangzhou. Today a remaining few cater to yuppies who treat dried-salt fish as nostalgia cuisine and pay high prices for home-style dishes prepared in fancy restaurants.
Local women engaged in the worship of the local goddess Tianhou (known at Tin Hau in Cantonese) in San Tin Village, Yuen Long District, Hong Kong New Territories. The occasion was the opening ceremony of the local temple which was renovated in 1970 (it was originally built in the 14th century). James and Rubie Watson lived in San Tin for 17 months in 1969-1970. Rubie Watson has made a life-long study of Cantonese village women’s lives.
See Also: Dr. Rubie Watson’s website
This is a vegetarian meal to celebrate the opening of a new temple in the Hong Kong New Territories (note the Coca-Cola cans). Oysters are considered "vegetarian" (jai in Chinese) because they do not move; they "grow" like any other crop and are "planted" (by inserting stones) in tidal flats along Deep Bay in Pearl River Delta. During Buddhist festivals great mounds of oyster shells appear in New Territories villages.
This is dowry gold on display in gold shops located in the town of Yuen Long, Hong Kong New Territories. Gold jewelry is worn at weddings and becomes the personal property of the bride (she can keep her gold as security in case of family emergency or special gifts for her children and/or daughters-in-law).
Rubie Watson has written at length about dowry gold, see her article, Class Differences and Affinal Relations in South China, Man vol. 16, no. 3, pp. 593-615 (1981).
This is the goddess Tianhou (Cantonese Tin Hau), the patron deity of the two lineages (Man and Teng) studied by James and Rubie Watson. This representation sits on the altar of Sand River Temple, along the Laufaushan Coast, Yuen Long District, Hong Kong New Territories.
Tianhou is discussed at length in several of Prof. Watson’s publications, including: Standardizing the Gods: The Empress of Heaven (Tianhou) Along the South China Coast, 960-1960, in Popular Culture in Late Imperial China, ed. by David Johnson, et al., University of California Press, 1985.
See "Standardizing the Gods"
James L. Watson, Professor Emeritus
jwatson [at] wjh.harvard.edu
Dr. Watson is Fairbank Professor of Chinese Society and Professor of Anthropology, Emeritus. He retired from Harvard in 2011, after 40 years of teaching. He previously taught at the University of London (School of Oriental and African Studies), University of Pittsburgh, University of Hawaii, and University of Houston.
He is Past-President of the Association for Asian Studies and Fellow of the American Academy of Arts and Sciences. He was appointed Harvard College Professor in 2003 in recognition of services to undergraduate teaching.
B.A. University of Iowa (Chinese Studies), 1965
Ph.D. University of California, Berkeley (Anthropology), 1972
Professor Watson is an ethnographer who has spent over 4 decades working in south China, primarily in villages (Guangdong, Jiangxi, and the Hong Kong region). He learned to speak country Cantonese in the Hong Kong New Territories during the late 1960s and has subsequently worked in many parts of the People’s Republic (using Mandarin). His research has focused on Chinese emigrants to London, ancestor worship and popular religion, family life and village organization, food systems, and the emergence of a post-socialist culture in the PRC. Prof. Watson also worked with graduate students in Harvard’s Department of Anthropology to investigate the impact of transnational food industries in East Asia, Europe, and Russia.
Follow the links to Prof. Watson’s English language publications. He has also published in Chinese and Japanese, and would be happy to send offprints to interested parties upon request.
SARS in China: Prelude to Pandemic?
Between Two Cultures
Asian and African systems of slavery
Golden Arches East
Cultural Politics of Food
Class and social stratification in post-revolution China
Death Ritual in Late Imperial and Modern China
Emigration and the Chinese Lineage
Village Life in Hong Kong
Adoption and Lineage
Chinese Death Ritual: Introduction
Chinese Diaspora Formation
Chinese Kinship Reconsidered
Big Mac Attack
Globalization in Encyclopedia Britanica
Obesity in China
Guide to Kinship Jargon
Standardizing the Gods
Feeding the Revolution
Forty Years on the Border
Geomancy and Colonialism
Rites or Beliefs
Transactions in People
Adoption of Outsiders
Chinese Lineage Reexamined
Of Flesh and Bones
Banqueting Rites in Hong Kong
Killing the Ancestors
Structure of Death Rites
Full publication List
San tin village, hong kong
The ethnographic photos in this series were taken by Prof. Watson in San Tin Village, Hong Kong New Territories, 1969-1970. San Tin is a single-lineage/single-surname village, which means that all people born into this community are direct descendants of one man (Man Sai-go). Members of the Man lineage have lived in this village for over seven hundred years. Today this kinship group constitutes a transnational diaspora with representatives living in over a dozen countries.
See Also: Prof. Watson’s Journal of Asian Studies article on this subject.
This is the main altar in the San Tin ancestral hall dedicated to the founder, Man Sai-go. Each wooden tablet is engraved with a specific (male) ancestor's name, the surname(s) of his wife (or wives), his generation number, and any titles he might have earned or received. Each tablet place (or wei, "seat") must be purchased by the family of the ancestor. New tablets can only be installed when the ancestral hall is renovated (once every two hundred years, on average); the altar is expanded during renovation, making room for new generations. Elders told Prof. Watson (in the late 1960s) that each tablet contains an aspect of the ancestor's soul (shen); other aspects reside in the tomb and in the paper list of ancestors kept in the home. Copyright James L. Watson©
Descendants of Man Sai-go (the founding ancestor of the San Tin Man lineage) are gathered at his tomb during the 1970 Double-Nine Festival (ninth of the ninth lunar month). Roast pigs are presented at the tomb and later taken back to San Tin where they are divided among attendees. The local schoolmaster is reading an annual report ("worshipful words") to the ancestor, detailing the accounts of the founder's estate (proceeds from land and property owned in Man Sai-go's name). The report is being read in classical Chinese (wenyen), pronounced in Cantonese -- hence the smiles on some faces (no one but the ancestor and the school master can really understand what is being said). Each generation bows (Cantonese: kautau) to the ancestor as a unit; distinctions based on class, wealth, or status are ignored during the ritual. All descendants are equal in the eyes of the ancestor. The only person singled out for special attention is the lineage master (zuzhang), the oldest surviving elder in the most senior generation. In this photo the lineage master kneels in the first row along with his other generation mates. He holds a small tray containing a single teacup (which is used to pour out libations to the ancestor).
This is the gate to Yan Sau Wai, the original hamlet that was built by Man lineage farmers when they moved to the San Tin area. San Tin is the general name for a nucleated cluster of eight hamlets (not all of which are walled). Each hamlet has its own earth shrine and neighborhood rituals. Walled hamlets have been built in the Pearl River Delta for at least eight hundred years, dating from the first Han Chinese settlers. The walled compounds are called wai in Cantonese (wei in Mandarin), which means enclosure; most wai had at least one watchtower. In effect, wai are small fortresses that contain up to 100 small houses. Gates, including this one, were closed at night; the walls were six to ten feet high and approximately three feet thick (composed of rocks, gravel, and bricks sealed in a special lime-based covering). Delta villages like San Tin were prey to bandits and river pirates until the 1950s. Villagers had to provide their own security (including local militias and crop-watching societies) in the absence of effective state control. Rubie Watson spent many hours interviewing older women in walled hamlets like Yan Sau Wai. Once they reached advanced age, women seldom left the confines of their own hamlet and became active participants in the village security system (they kept close watch on everything that happened in their realm). This photo was taken in 1969. Yan Sau Wai's gate was "modernized" in the 1970s and is now covered by ceramic tiles.
In 1970 members of the Man lineage gathered in San Tin's central plaza (depicted here) to make the annual pilgrimage to their founder's tomb (see photo above: Founding Ancestor Worship). The local school children (90 percent of whom were Man) were given a holiday for the day. Elders (men aged 61 or older) were driven to the tomb in small buses hired for the day; children, youths, and one visiting anthropologist rode in the lorries shown in the photo. During that era, women did not participate in the ancestral rites at the founder's tomb (beginning in the 1980s Man daughters did start attending). The trucks and buses passed by neighboring lineage villages (honking horns and making as much racket as possible) to demonstrate to their traditional rivals that the Man lineage was prospering. Until the 1950s (when Hong Kong Police put an end to such violence) pilgrimages to founders’ tombs were occasions for battles between rival lineages. In 1970 several of the older men (some of whom are shown in the photo) confided to Prof. Watson that they missed the "good old days" when a little blood was shed in the service of the ancestor. By comparison, they said, the events depicted here were tame and a little boring. But the young people had a marvelous time, and looked forward to the event every year.
In 1970 the Man lineage at San Tin renovated the local Tianhou Temple and held an opera to honor the deity. (Tianhou, or Tinhau in Cantonese, is usually translated as "Empress of Heaven"; she is the patron deity for San Tin as well as many other lineage communities along the Pearl River Delta.) A temporary opera shed (depicted here) was erected and a Cantonese opera troupe was hired to perform for five days and nights. Relatives, neighbors, and friends from other villages came to San Tin for the festivities. Many of San Tin's emigrant workers returned from Europe (where they worked in the restaurant trade) to attend the opera. The event was financed by remittances from Europe.
Every year major lineages in the Hong Kong New Territories share pork among descendants of key ancestors. In this photo elders of the Man lineage (located in San Tin Village) carefully weigh and divide shares of meat paid for by the ancestor himself (even though he has been dead for seven centuries, he is very much "alive" socially - through the mechanism of his ancestral estate). Shares of this pork were given only to male descendants of the relevant ancestor, as verification of lineage membership. Pork divisions (fen jurou) like this are still observed in many parts of the New Territories. Prior to the 1960s, this may have been the only meat many people ate all year. Today the meat does not have real nutritional value, but it has great symbolic value to members of local lineages.
This is a typical Cantonese village house located in San Tin (the hamlet of Fan Tin Tsuen). The photo was taken in 1969. The dramatic arch at the top of the house is designed (according to San Tin elders) to deflect ghosts and bad fengshui (wind and water, known in English as geomancy). The terra-cotta frieze is an indication that the original builder was affluent by village standards. Other indicators of affluence are the stone threshold and the high-fired, thin bricks (known locally as "blue bricks," which are harder and longer lasting than standard low-fired, "brown" bricks). The wooden door is decorated with wood-block prints depicting door gods, which are thought to protect the household from intruders. These hand-drawn prints were purchased in the nearby market town of Yuen Long. Lunar New Year posters adorn the sides of the door; these are couplets of a poem in classical Chinese, produced -- on the spot -- by an itinerant calligrapher. The ceramic jug contains water carried from one of the village wells. The small stove on the left was used primarily to prepare pig feed from vegetation dredged from a nearby pond. The entire corpus of material culture depicted in the photo has disappeared. This house was demolished in the 1970s to make way for a new, two-story, modern house with electricity and running water. That house, in turn, was torn down in the early 1990s and a three-story house (with central air conditioning and satellite TV dish) now stands on the spot. Prof. Watson has been keeping track of housing changes since he first lived in San Tin during the late 1960s. The new houses were built by returned emigrants who made their fortunes in Europe or Canada.
11th Street Bridges, Washington DC
Case Study Introduction
The District Department of Transportation (DDOT) and the Federal Highway Administration (FHWA) initiated the 11th Street Bridges project in 2005 to improve the highway connection between the Southeast/Southwest Freeway (I-695) and the Anacostia Freeway (I-295 and DC-295) in southeast Washington DC. The project study area is shown in Figure 1. The project was to replace obsolete infrastructure, provide missing freeway connections to improve traffic flow to and from downtown Washington DC, discourage cut-through traffic on neighborhood streets, improve local access, and better link land uses across the Anacostia River.
Figure 1: Map of the 11th Street Bridges study area
When the Southeast/Southwest Freeway was built in the mid-1960s, regional plans expected it to extend across the river and then join the Anacostia Freeway. However, those plans were abandoned, and today there is no direct connection between the Southeast/Southwest Freeway and the Anacostia Freeway to the north of the 11th Street Bridge complex. Traffic, therefore, is forced to use neighborhood streets to access the 11th Street Bridge complex and cross the Anacostia River. The result is increased traffic on local neighborhood streets, such as Martin Luther King, Jr. Avenue, Good Hope Road, Pennsylvania Avenue, and Minnesota Avenue.
The 11th Street Bridge/Anacostia Freeway interchange does not allow traffic east of the Anacostia River to enter the Anacostia Freeway at this location. Drivers may cross the 11th Street Bridge toward downtown Washington DC or return, but they cannot enter or leave the Anacostia Freeway without taking neighborhood streets to adjacent interchanges at Pennsylvania Avenue, Howard Road, or South Capitol Street.
Because the project involves reconfiguring the ramps on either shore but does not involve adding capacity to the freeway system, the project termini are where the ramps merge back into the existing freeway.
DDOT and FHWA signed the Draft Environmental Impact Statement (DEIS) in June 2006, the Final Environmental Impact Statement (FEIS) in September 2008, and the Record of Decision in July 2009. Construction of the $300 million project is now under way (as of March 2010).
The 11th Street Bridges project is a key component in District of Columbia’s plans to revitalize the Anacostia riverfront. In March 2000, federal and District agencies signed an agreement forming the Anacostia Waterfront Initiative (AWI) to transform the Anacostia River into a revitalized urban waterfront. The AWI brought together 20 federal and District agencies that own or control land along the Anacostia River to sign the AWI Memorandum of Understanding, creating a partnership between the federal and District governments to transform the Anacostia River waterfront. With Washington DC’s downtown nearly built out, the city is growing eastward toward and across the Anacostia River. The District is committed to recentering its growth along the Anacostia River and improving long-neglected parks, environmental features, and infrastructure in the area. The 11th Street Bridges project falls within the context of the AWI and other planning activities within the project area.
The AWI fostered a number of transportation studies, one of which was the Middle Anacostia River Crossings Transportation Study (MAC Study). The study was undertaken to evaluate traffic conditions and to recommend options to improve bridge and roadway connections between the 11th Street and John Philip Sousa Bridges to enhance mobility on both sides of the Anacostia River. The study proposed several short- and long-term improvements that include completing the 11th Street Bridge ramps to I‑295, reestablishing Barney Circle as an actual circle, separating the interstate (regional) traffic from the local traffic, riverfront access improvements, signage improvements, and pedestrian improvements. The findings and recommendations in the MAC Study formed the basis of the 11th Street Bridges’ alternative development and evaluation process.
Purpose and Need
The purpose of the 11th Street Bridges Project is fourfold:
- Reduce congestion and improve the mobility of traffic across the Anacostia River on the 11th Street Bridges and on the local streets in the area.
- Increase the safety of vehicular, pedestrian, and bicycle traffic in the Anacostia neighborhood.
- Replace deficient infrastructure and roadway design.
- Provide an alternative evacuation route and routes for security movements in and out of the nation’s capital.
The following transportation needs are to be met by the project:
- Improve Access and Reduce Congestion—Provide missing access to the Anacostia Freeway. Reduce volume of freeway traffic that spills onto the neighborhood streets due to current traffic patterns.
- Enhance Safety—Provide safe pedestrian and bicycle access across the river and to the Anacostia waterfront. Correct roadway design elements that reduce safety and result in congestion. Reduce number of vehicular crashes in the project interchanges.
- Correct Design Deficiencies—Replace bridges that are functionally and structurally obsolete. Improve signage in the project area to reduce confusion.
- Augment Homeland Security—Upgrade evacuation route for the nation’s capital and area military installations.
Travel Forecasting Summary
The Metropolitan Washington Council of Governments (MWCOG) model was used to generate traffic forecasts for the 2030 design year. The MWCOG model, which simulates transportation and land use conditions in the greater Washington DC region, encompasses more than 4,000 square miles. The model was developed to provide a basis for predicting the overall expected travel trends in future years, based on planned land-use development and highway network scenarios at the regional level. MWCOG’s model, which uses Version 2.1D #50 of Citilabs’ TP+ program, meets USEPA requirements for air quality conformity analysis. It incorporates land use assumptions from MWCOG’s Round 6.4A. The Cooperative Forecasting Program, administered by the MWCOG, enables local, regional, and federal agencies to coordinate planning using common assumptions about future growth and development in the region. Each series of forecasts, or “round,” provides land use activity forecasts of employment, population, and households by five-year increments. Each round covers a period of 20 to 30 years. Round 6.4A represented the most recent land use forecast available at the time the travel forecasting work was carried out.
DDOT obtained a copy of the MWCOG model for use in forecasting the 11th Street Bridges traffic volumes. Forecast traffic volumes from the model were used for traffic operational analyses, air quality conformity analyses, and traffic noise analyses. The traffic forecasts were used to assess and compare travel conditions under a No-Build Alternative and each of the build alternatives. The following traffic/transportation analyses were completed for each of the alternatives:
- Prediction/modeling of future traffic and travel patterns
- Analysis of future traffic operations
- Comparison of access changes to key land uses or areas
- Analysis of travel times
- Evaluation of vehicular safety considerations
- Evaluation of impacts to pedestrians
- Evaluation of impacts to bicyclists
- Evaluation of impacts to transit operations
- Evaluation of impacts to freight
Case Study Illustration of the Guidance
The 11th Street Bridges project provides a good illustration of one of the key considerations contained in FHWA’s Guidance on the Application of Travel and Land Use Forecasting in NEPA. It was clear to the project team from the start that while MWCOG’s regional model can be effective in answering big-picture questions, it would be ineffective, without modification, in answering such project-specific questions as “What will be the effect of adding missing freeway connections to traffic volumes east and west of the Anacostia River?” Performing the upfront work, using the latest available data to refine the part of the regional model within the project study area, convinced DDOT, FHWA, and USEPA that the team was able to credibly compare alternatives in a forecast setting. This allowed the project team to proceed through the alternatives development and refinement phase in an efficient manner and to keep the fast-paced study on schedule. This case study emphasizes consideration 2 of the guidance: Suitability of Modeling Methods, Tools, and Underlying Data.
Key Consideration 2 of the Guidance: The Suitability of Modeling Methods, Tools, and Underlying Data
Age of Forecasts, Models, Data, and Methods
The first travel demand model was run in 2004 under DDOT's MAC Study, before the start of the 11th Street Bridges EIS. This gave DDOT enough data to determine that the best solution to the inefficient connections between the east and west sides of the Anacostia River in the study area was to reconfigure the 11th Street Bridges interchange (rather than building a flyover at Pennsylvania Avenue, which had been proposed in the past).
In 2005, DDOT hired CH2MHILL to develop an EIS to evaluate replacing the 11th Street bridges and reconstructing the east-side interchange. For the DEIS, the travel demand model was run with the 2005 version of the MWCOG model. The MWCOG model network was refined for the 11th Street Bridges Study to represent future roadway networks based on transportation projects in MWCOG’s 2004 update to the Constrained Long-Range Plan and major projects in the FY2005–2010 Transportation Improvement Program. Both plans represented the most current information available at the start of the 11th Street Bridges project. The land use and other inputs to the MWCOG model were not changed for the purpose of the study. Because DDOT committed to no new net capacity on the system as a result of the 11th Street Bridges project, the emphasis was placed correctly on travel demand rather than land use. During the preparation of the DEIS, a new MWCOG model had been released with new land forecasts, so the project’s travel demand model was run again using the 2007 model. The project’s FEIS, ROD, and Interchange Justification Report were completed with the data from the 2007 model.
Calibration, Validation, and Reasonableness Checking of Travel Models
As noted, the MWCOG model simulates transportation and land-use conditions for a region around Washington DC encompassing more than 4,000 square miles. To allow a meaningful comparison between the traffic impacts of the project’s Build and No-Build alternatives within the study area, which constitutes a very small area within the regional model, the project team identified the boundaries of an area (subarea) that would be the focus of the project’s travel demand modeling efforts.
After receiving the 2005 version of the 2030 no-build model from MWCOG for the subarea identified above, the project team performed a quality check on the data within the subarea. The initial step in the quality check was to review documentation MWCOG published (FY-2004 Network Documentation: Highway and Transit Network Development, November 17, 2004) that listed all the roadway network assumptions in the model. In addition, the project team reviewed the Constrained Long-Range Plan, the Transportation Improvement Program, and DDOT’s AWI Transportation Master Plan for consistency and to determine which projects were included in the 2030 no-build forecast. The review uncovered the fact that the MWCOG model included all regional programs identified in the Constrained Long-Range Plan and Transportation Improvement Program, but no projects from the AWI Transportation Master Plan. The Master Plan lists 16 transportation projects, including the 11th Street Bridges and South Capitol Street Bridge projects, in the AWI study area and a proposed construction sequence. Because of limitations in funding and the expectation that not all these projects will affect traffic patterns or demand, it was determined that most of the Master Plan’s projects should not be included in the forecast traffic models. The two exceptions were the 11th Street Bridges and South Capitol Street Bridge projects, which are major improvements with dedicated funding and expected significant changes to the travel patterns in the AWI area. After coordination with FHWA, MWCOG, and USEPA, it was agreed that the 11th Street Bridges Project 2030 no-build model would be modified to include the assumed completion of the South Capitol Street Bridge Project. Likewise, the South Capitol Street Bridge Project 2030 no-build model roadway network would be modified to include the assumed completion of the 11th Street Bridges Project.
After resolving the range of projects to include in the 2030 no-build network, the team analyzed the roadway network in the immediate study area (subarea) and identified discrepancies between the MWCOG model roadway network and the study area roadway network. This was done by comparing data on the local road network contained in the project’s Existing Conditions Report to the model’s roadway network. The project team updated the subarea network by adding links and nodes not present in the regional model, modifying the number of lanes on network links, and incorporating (or removing) turn prohibitions at intersections.
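The link-by-link reconciliation described above can be illustrated with a small sketch. The node IDs, lane counts, and data structures below are hypothetical, invented for the example; they are not actual MWCOG network data.

```python
# Hypothetical sketch: flag discrepancies between a regional model's
# subarea network and a field-verified roadway inventory.

model_links = {
    # (from_node, to_node): lanes coded in the regional model
    (101, 102): 2,
    (102, 103): 3,
    # link (103, 104) is missing from the regional model
}

inventory_links = {
    # (from_node, to_node): lanes per the Existing Conditions inventory
    (101, 102): 3,   # model undercounts lanes on this link
    (102, 103): 3,
    (103, 104): 2,   # local street absent from the coarse regional network
}

def reconcile(model, inventory):
    """Return links to add and links whose lane coding must be corrected."""
    missing = sorted(set(inventory) - set(model))
    mismatched = sorted(
        (link, model[link], inventory[link])
        for link in set(model) & set(inventory)
        if model[link] != inventory[link]
    )
    return missing, mismatched

missing, mismatched = reconcile(model_links, inventory_links)
print("links to add:", missing)          # [(103, 104)]
print("lane corrections:", mismatched)   # [((101, 102), 2, 3)]
```

In practice such checks would run against the model's full link table, and the corrections (added links and nodes, revised lane counts, turn prohibitions) would then be coded back into the subarea network.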
Following the initial cleanup described above, the subarea model was plugged back into the MWCOG regional model, and the model was run to check the output against existing conditions in the study area. The final revisions to the subarea model were then made. At that point, the project team was able to code the 2030 build networks using the revised 2030 no-build network and make modifications reflective of the roadway designs for the project’s four build alternatives.
Refining the MWCOG model to reflect the transportation system characteristics within the 11th Street Bridges study area was critical to developing a no-build model that the project team and regulatory agencies trusted to reasonably represent the traffic impacts of the build alternatives. Without the modifications made by the team, the gross level of detail in the regional model would not have been able to identify the traffic performance distinctions among relatively similar build alternatives. As noted in the DEIS, all build alternatives “provide the same basic traffic service by providing eight freeway lanes and four local lanes over the Anacostia River along the same basic alignment as the current crossings. They all achieve separation of freeway traffic from local street traffic, and they all provide a safe river crossing for pedestrians and bicyclists. Every build alternative is designed to provide direct ramp connections, which do not currently exist, from the Anacostia Freeway north of 11th Street to I-295 over the Anacostia River. Every build alternative provides common ramp and access schemes for traffic west of the river.”
For projects such as the 11th Street Bridges that base purpose and need, in part, on addressing traffic deficiencies (improve access and reduce congestion; provide missing access to the Anacostia Freeway; reduce volume of freeway traffic that spills onto the neighborhood streets due to current traffic patterns), the inability to draw reliable traffic differences among alternatives eliminates traffic performance as an alternative screening consideration and makes it uncertain whether the alternatives ultimately serve purpose and need.
Additional Background and Sources
FEIS and ROD
The FEIS was produced as a two-volume set. Chapters 4 (Purpose and Need), 5 (Alternatives), and 8 (Traffic and Transportation Analyses) of the FEIS were sources of information for this case study. Appendix B of Volume 2, Traffic and Transportation, was also a reference.
DDOT and FHWA signed the Draft Environmental Impact Statement in June 2006, the Final Environmental Impact Statement in September 2008, and the Record of Decision in July 2009. Construction of the $300 million project is now under way (as of March 2010).
Documents available online at: http://ddot.dc.gov/DC/DDOT/Projects+and+Planning/Capital+Infrastructure+Projects/11th+Street+Bridge+Project
- Bart Clark
- District of Columbia Department of Transportation
- Director, Anacostia Waterfront Initiative
- 64 New York Avenue, NE
- Washington, DC 20002
- (202) 671-4696 | 1 | 2 |
Embedded Systems/PIC Microcontroller
Manufactured by Microchip, the PIC ("Programmable Intelligent Computer" or "Peripheral Interface Controller") microcontroller is popular among engineers and hobbyists alike. PIC microcontrollers come in a variety of "flavors", each with different components and capabilities.
Many types of electronic projects can be constructed easily with the PIC family of microprocessors, among them clocks, very simple video games, robots, servo controllers, and many more. The PIC is a very general purpose microcontroller that can come with many different options, for very reasonable prices.
General Instruments produced a chip called the PIC1650, described as a Programmable Intelligent Computer. This chip is the mother of all PIC chips, functionally close to the current 16C54. It was intended as a peripheral for their CP1600 microprocessor. Maybe that is why most people think PIC stands for Peripheral Interface Controller. Microchip has never used PIC as an abbreviation, just as PIC. And recently Microchip has started calling its PIC microcontrollers PICmicro MCUs.
Which PIC to Use
How do you find a PIC that is right for you out of nearly 2000 different models of PIC microcontrollers?
The Microchip website has an excellent Product Selector Tool. You simply enter your minimum requirements and optionally desired requirements, and the resulting part numbers are displayed with the basic features listed.
You can buy your PIC processors directly from Microchip Direct, Microchip's online store. Pricing is the same or sometimes better than many distributors.
Rule Number 1: only pick a microprocessor you can actually obtain. PICs are all similar, and therefore you don't need to be too picky about which model to use.
If there is only 1 kind of PIC available in your school storeroom, use it. If you order from a company such as Newark or DigiKey, ignore any part that is "out of stock" -- only order parts that are "in stock". This will save you lots of time in creating your project.
Recommended "first PIC"
At one time, the PIC16F84 was far and away the best PIC for hobbyists. But Microchip, Parallax, and Holtek are now manufacturing many chips that are even better and often even cheaper, because of the higher level of production.
- I'd like a list of the top 4 or so PIC recommendations, and *why* they were recommended, so that when better/cheaper chips become available, it's easy to confirm and add them to the list.
PIC: Select a chip and buy one.
Many people recommend the following PICs as a good choice for the "first PIC" for a hobbyist; take into account the revision numbers (like the A in 16F628A):
- PIC18F4620: it has 13 analog inputs -- Wouter van Ooijen recommends that hobbyists use the largest and most capable chip available, and this is it (as of 2006-01). ~$9
- PIC16F877A -- the largest chip of the 16F87x family; has 8 analog inputs -- recommended by Wouter (#2); AmQRP; PICList. ~$8
- PIC16F877A, this is probably the most popular PIC used by the hobbyist community that is still under production. This is the best PIC of its family and used to be "the PIC" for bigger hobbyist projects, along with the PIC16F84 for smaller ones. Features 14KB of program memory, 368 bytes of RAM, a 40 pin package, 2 CCP modules, and 8 ADC channels capable of 10-bit each. It also includes a UART and an MSSP, an SSP capable of acting as master and controlling devices connected to the I2C and SPI busses. The lack of an internal oscillator, as opposed to the other PICs mentioned until now, is something to be aware of. Also, this PIC is relatively expensive for the features included. This may be intentional on Microchip's part, to encourage migration to better chips. -- recommended by Ivaneduardo747; Wouter (#2); AmQRP. ~$9
- PIC16F88 -- has 7 analog inputs -- recommended by AmQRP; SparkFun. ~$5
- PIC16F88, this is an enhanced version of the PIC16F628A. It has all the features of the 16F628, plus twice the program memory (7KB); seven 10-bit ADC channels; and an SSP (Synchronous Serial Port), capable of receiving messages sent over I2C and SPI busses. It also supports self-programming, a feature used by some development boards to avoid the need for a programmer, saving the cost of buying one. -- recommended by Ivaneduardo747; AmQRP; SparkFun. ~$5
- PIC16F628 -- Cheaper than the PIC16F84A, with a built-in 4MHz clock and a UART, but lacks any analog inputs -- recommended by Wouter (#3); AmQRP. ~$4
- PIC16F628A, this is a good starter PIC because of its compatibility with what used to be one of the hobbyist's favorite PICs: the PIC16F84. This way, the beginner can select from a vast catalog of projects and programs, especially those created in low level languages like PIC assembler. It features an 18 pin package, 3.5KB of Flash memory, and can execute up to 5 million instructions per second (MIPS) using a 20MHz crystal. The lack of an Analog-Digital Converter (ADC) is something to point out. As opposed to the PIC16F84A it has a UART, which is capable of generating and receiving RS-232 signals, which is very useful for debugging. Some people find it ironic that this chip is cheaper than the less-featured PIC16F84A. -- recommended by Ivaneduardo747; Wouter (#3); AmQRP. ~$5
- PIC16F1936, a powerful mid-range PIC, comes with an 11 channel, 10-bit ADC; two indirect pointer registers; XLP (extreme low power) for low power consumption on battery powered devices. -- recommended by some people on the PIClist as a faster, better, cheaper replacement for the 16F877. -- ~$3
- PIC12F683, a small 8-pin microcontroller. It is a good microcontroller for small applications due to its small size and relatively high power and diverse features, like 4 ADC channels and an internal 4MHz oscillator. -- recommended by Ivaneduardo747. ~$2.50
Of the many new parts Microchip has introduced since 2003, are any of them significantly better for hobbyists in some way than these chips? Todo: Does "Starting out PIC Programming: What would be a good PIC chip to start out with?" have any useful recommendations to add to the above?
There are several different "families":
More selection tips
- The "F" Suffix implies that the chip has reprogrammable Flash memory.
- PIC10F -- in super-tiny 6-pin packages
- PIC12F -- in tiny 8-pin packages
- PIC14F
- PIC16F
- PIC18F
- PIC24F, PIC24E, PIC24H
- dsPIC30F, dsPIC33F, dsPIC33E
- The "C" suffix implies that the chip uses EPROM memory. A few of these chips used to be erased with a very expensive Ultra-Violet eraser. This method was primarily used by companies. But most of these chips are specifically made so that once you write it you can't change it: it's OTP (one-time programmable). People used to check their programs minutely before programming them into such chips. Recently, this chips are becoming less used as the cost of Flash memory decreases, but some of them are still used because of their reliability or reduced costs.
- PIC12C
- PIC16C
- PIC17C
- PIC18C
Each family has one "full" member with all the goodies and a subset of variant members that lack one thing or another. For example, in the 16F84 family, the 16F84 was the fully featured PIC, with Flash memory and twice the program space of the 16F83. The family was also composed of the 16C84 and 16C83, among the few reprogrammable C-suffix PICs. For prototyping, we generally use the "full" version to make sure we can get the prototype working at all. During prototyping we want to tweak code, reprogram, and test, over and over until it works. So we use one of the above "Flash" families, not the "OTP" families, unless required. For short production runs, the C parts are recommended. For very long production runs, some PICs with mask-programmed ROMs were used. Now in-factory preprogramming is available from Microchip.
Each member of each family generally comes in several different packages. Hobbyists generally use the plastic dual inline package (often called DIP or PDIP) because it's the easiest to stick in a solderless breadboard and tinker with. (The "wide-DIP" works just as well). They avoid using ceramic dual inline package (CDIP), not because ceramic is bad (it's just as easy to plug into a solderless breadboard), but because the plastic parts work just as well and are much cheaper.
(Later, for mass production, we may figure out which is the cheapest cut-down version that just barely has enough goodies to work, and comes in the cheapest package that has just barely enough pins for this particular application ... perhaps even an OTP chip.)
And then each different package, for each member of each family, comes in both a "commercial temperature range" and an "industrial temperature range".
PIC 16x
The PIC 16 family is considered to be a good, general purpose family of PICs. PIC 16s generally have 3 output ports to work with. Here are some models in this family that were once common:
- PIC 16C54 - The original PIC model, the 'C54 is available in an 18 pin DIP, with 12 I/O pins.
- PIC 16C55 - available in a 28-pin DIP package, with 20 available I/O pins
- PIC 16C56 - Same form-factor as the 'C54, but more features
- PIC 16C57 - same form-factor as the 'C55, but more features
- PIC 16C71 - has 4 available ADC, which are mapped to the same pins as Port A (dual-use pins).
- PIC 16C84 - has the ability to erase and reprogram in-circuit EEPROMs
Many programs written for the PIC16x family are available for free on the Internet.
Flash-based chips such as the PIC16F88 are far more convenient to develop on, and can run code written for the above chips with little or no changes.
PIC 12x
The PIC12x series is the smallest series with 8 pins and up to 6 available I/O pins. These are used when space and/or cost is a factor.
PIC 18x
The PIC 18x series are available in a 28 and 40-pin DIP package. They have more ports, more ADC, etc... PIC 18s are generally considered to be very high-end microcontrollers, and are even sometimes called full-fledged CPUs.
Microchip is currently (as of 2007) producing 6 Flash microcontrollers with a USB interface. All are in the PIC18Fx family. (The 28 pin PIC18F2450, PIC18F2455, PIC18F2550; and the 40/44 pin PIC18F4450, PIC18F4455, PIC18F4550 ).
The PIC Stack
The PIC stack is a dedicated bank of registers (separate from programmer-accessible registers) that can only be used to store return addresses during a function call (or interrupt).
- 12 bit: A PIC microcontroller with a 12 bit core (the first generation of PIC microcontrollers) ( including most PIC10, some PIC12, a few PIC16 ) only has 2 registers in its hardware stack. Subroutines in a 12-bit PIC program may only be nested 2 deep, before the stack overflows, and data is lost. People who program 12 bit PICs spend a lot of effort working around this limitation. (These people are forced to rely heavily on techniques that avoid using the hardware stack. For example, macros, state machines, and software stacks).
- 14 bit: A PIC microcontroller with a 14 bit core (most PIC16) has 8 registers in the hardware stack. This makes function calls much easier to use, even though people who program them should be aware of some remaining gotchas.
- 16 bit: A PIC microcontroller with a 16 bit core (all PIC18) has a "31-level deep" hardware stack depth. This is more than deep enough for most programs people write.
Many algorithms involve pushing data onto, then later pulling data from, some sort of stack. People who program such algorithms on the PIC must use a separate software stack for data (reminiscent of Forth). (People who use other microprocessors often share a single stack for both subroutine return addresses and this "stack data".)
Call-tree analysis can be used to find the deepest possible subroutine nesting used by a program. (Unless the program uses w:recursion). As long as the deepest possible nesting of the "main" program, plus the deepest possible nesting of the interrupt routines, give a total sum less than the size of the stack of the microcontroller it runs on, then everything works fine. Some compilers automatically do such call-tree analysis, and if the hardware stack is insufficient, the compiler automatically switches over to using a "software stack". Assembly-language programmers are forced to do such analysis by hand.
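The call-tree analysis described above can be sketched as a depth-first walk over a call graph. Everything below is illustrative: the function names are hypothetical, the model counts one stack slot per level of nesting, and (as the text notes) it assumes no recursion.

```python
def max_call_depth(call_graph, root):
    """Return the deepest call nesting reachable from `root`,
    counting `root` itself as one level.

    `call_graph` maps each function name to the list of functions
    it calls. Assumes the graph contains no recursion.
    """
    callees = call_graph.get(root, [])
    if not callees:
        return 1  # a leaf function adds a single level
    return 1 + max(max_call_depth(call_graph, f) for f in callees)

# Hypothetical firmware call graph.
calls = {
    "main": ["read_adc", "update_display"],
    "read_adc": ["spi_transfer"],
    "update_display": [],
    "spi_transfer": [],
}

main_depth = max_call_depth(calls, "main")  # main -> read_adc -> spi_transfer = 3
isr_depth = 1                               # say the interrupt routine calls nothing
print(main_depth + isr_depth)               # 4 -- fine on an 8-deep 14-bit PIC,
                                            # too deep for a 2-deep 12-bit PIC
```

A compiler doing this analysis would compare that total against the hardware stack size of the target part and fall back to a software stack if it does not fit.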
What else do you need
Compilers, Assemblers
Versions of BASIC, C, Forth, and a few other programming languages are available for PICmicros. See Embedded Systems/PIC Programming.
You need a device called a "downloader" to transfer compiled programs from your PC and burn them into the microcontroller. (Unfortunately "programming" has 2 meanings -- see Embedded_Systems/Terminology#programming.)
There are 2 styles of downloaders. If you have your PIC in your system and you want to change the software,
- with a "IC programmer" style device, you must pull out the PIC, plug it into the "IC programmer", reprogram, then put the PIC back in your system.
- with a "in circuit programmer" style device (ICSP), you don't touch the PIC itself -- you plug a cable from the programmer directly into a header that you have (hopefully) placed next to the PIC, reprogram, then unplug the cable.
An (incomplete) list of programmers includes:
- BobProg - Simple ICSP programmer with external power supply
- JDM Programmer modified for LVP microcontrollers
- In Circuit Programmer for PIC16F84 PIC16F84 Programmer
- IC Programmer ICProg - programs: 12Cxx, 16Cxxx, 16Fxx, 16F87x, 18Fxxx, 16F7x, 24Cxx, 93Cxx, 90Sxxx, 59Cxx, 89Cx051, 89S53, 250x0, PIC, AVR, 80C51, etc.
- Many other programmers are listed at MassMind.
Many people prefer to use a "bootloader" for programming whenever possible. Bootloaders are covered in detail in the chapter Bootloaders and Bootsectors.
Power Supply
The most important part of any electronic circuit is the power supply. The PIC programmer requires a +5 volt and a +13 volt regulated power supply. The need for two power supplies is due to the different programming algorithms:
- High Power Programming Mode - In this mode, we enter the programming mode of the PIC by driving the RB7(Data) and RB6(CLOCK) pins of the PIC low while driving the MCLR pin from 0 to VCC(+13v).
- Low Power Programming Mode - This algorithm requires only +5v for the programming operation. In this algorithm, we drive RB3 (PGM) from VDD to GND to enter the programming mode and then set MCLR to VDD (+5v).
Pin Diagrams
Oscillator Circuits
The PIC microcontrollers all have built-in RC oscillator circuits available, although they are slow and have high granularity. External oscillator circuits may be applied as well, up to a maximum frequency of 20MHz. PIC instructions require 4 clock cycles for each machine instruction cycle, and therefore can run at a maximum effective rate of 5MHz. However, certain PICs have a PLL (phase locked loop) multiplier built in. The user can enable the Times 4 multiplier, thus yielding a virtual oscillator frequency of 4 X External Oscillator. For example, with a maximum allowable oscillator of 16MHz, the virtual oscillator runs at 64MHz. Thus, the PIC will perform 64 / 4 = 16 MIPS (million instructions per second). Certain PICs also have built-in oscillators, usually 4MHz for precisely 1 MIPS, or a low-power imprecise 48kHz. This frees up to two I/O pins for other purposes. The pins can also be used to produce a frequency if you want to synchronize other hardware to the same clock as one PIC's internal one.
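The MIPS arithmetic above (4 clock cycles per machine instruction, optional ×4 PLL) can be checked with a few lines; the constants come straight from the paragraph, and the helper function is just an illustration, not part of any toolchain.

```python
CYCLES_PER_INSTRUCTION = 4  # each PIC machine instruction takes 4 clock cycles

def effective_mips(osc_hz, pll_multiplier=1):
    """Instruction rate in MIPS for a given external oscillator frequency,
    optionally scaled by the on-chip PLL multiplier."""
    virtual_hz = osc_hz * pll_multiplier
    return virtual_hz / CYCLES_PER_INSTRUCTION / 1e6

print(effective_mips(20_000_000))     # 5.0  -> 20 MHz crystal, no PLL
print(effective_mips(16_000_000, 4))  # 16.0 -> 16 MHz crystal with the x4 PLL
print(effective_mips(4_000_000))      # 1.0  -> 4 MHz internal oscillator
```

This matches the worked example in the text: a 16MHz oscillator with the ×4 PLL gives a 64MHz virtual clock and 64 / 4 = 16 MIPS.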
Continue with Embedded Systems/PIC Programming.
Further reading
There is a lot of information about using PIC microcontrollers (and electronics design in general) in the PICList archives. If you are really stumped, you might consider subscribing to the PICList, asking your question ... and answering someone else's question in return. The PICList archives are hosted at MassMind
- A Guide To PIC Microcontroller Documentation goes into more detail.
- RC Airplane/RCAP discusses a project that uses a PIC16F876A.
- the Parallax SX FAQ by Guenther Daubach
- Microchip PIC: the original manufacturer's web site
- Getting Starting with PICmicro controllers by Wouter van Ooijen
- "The PIC 16F628A: Why the PIC 16F84 is now obsolete."
- "The PIC 16F88: Why the PIC 16F84 is now Really obsolete."
- "Free PIC resources and projects with descriptions, schematics and source code."
- "Programming PICmicros in the C programming language"
- "Programming PICmicros in other programming languages: Forth, JAL, BASIC, Python, etc."
- The "8-bit PIC® Microcontroller Solutions brochure" describes how big the PIC hardware stack is in each PIC microcontroller family, and other major differences between families.
- Micro&Robot - 877: robot kit with self-programmable PIC Microcontroller! You don't need a PIC programmer.
- Programming the PIC16f628a with SDCC: An occasionally-updated list of examples demonstrating how to use the PIC's peripherals and interface with other devices with the free SDCC pic compiler.
BME103:T130 Group 17
BME 103 Fall 2012
LAB 1 WRITE-UP
Initial Machine Testing
Experimenting With the Connections
When the PCB board of the LCD screen was disconnected from the PCB circuit board, the display output was turned off.
When the white wire connecting the 16-tube PCR block to the PCB circuit board was disconnected, the ability to regulate the temperature of the PCR machine was lost.
October 25, 2012
After finishing the diagnostic analysis, the PCR machine was tested by setting the thermal cycler program to three stages. Stage one was one cycle of 95 °C for 3 minutes; stage two was 35 cycles of 95 °C for 30 seconds, 50 °C for 30 seconds, and 72 °C for 30 seconds; stage three was one cycle of 72 °C for 3 minutes. The test run lasted for about an hour and thirty minutes and confirmed that the temperature readings on the LED of the PCR machine and the computer matched.
Polymerase Chain Reaction
The PCR replicated the wanted DNA fragments from the patient. The PCR machine will heat up to 95 °C, cool down to 50 °C, and then heat back up to 72 °C within one cycle. Overall there will be 30 cycles. At the end of the 30 cycles we have over a billion of the wanted fragments, and 60 unwanted DNA molecule strands in the solution.
Step-by-step instructions to amplify the patient's DNA sample:
1. Extract the DNA from the patient.
2. Put the DNA into a special PCR tube.
3. Add primer #1 to the PCR tube with the DNA.
4. Add primer #2 to the PCR tube with DNA.
5. Add Nucleotides (the A,C,T,and G).
6. Add the DNA polymerase to the PCR tube.
7. Place the PCR tube into the thermal cycler.
8. Set the temperature of the thermal cycler to 95°C and set the machine to run 30 cycles.
9. Now the thermal cycler cools down to 50°C and primers #1 and #2 attach to the single strands of DNA.
10. Now the thermal cycler temperature changes to 72°C. This activates the DNA polymerase, which pairs each strand with its complementary nucleotides through to the end of the DNA strand.
11. Repeat steps 8-10 29 more times.
12. During cycle #3 the wanted DNA begins to appear.
13. The wanted piece of the DNA begins to double.
14. After 30 cycles are complete over a billion wanted DNA fragments will show in the DNA solution and there will be 60 copies of unwanted DNA molecules in the solution.
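The figures in steps 12-14 follow from simple cycle arithmetic: the wanted short fragment roughly doubles every cycle, while the long unwanted products grow linearly (one per original template strand per cycle). The sketch below is a deliberately idealized model of that arithmetic, not a simulation of real reaction efficiency.

```python
def pcr_yield(cycles, template_strands=2):
    """Idealized PCR yield after `cycles` thermal cycles.

    Wanted short fragments approximately double every cycle
    (2**cycles, starting from one template molecule); unwanted
    long products accumulate linearly, one per original template
    strand per cycle.
    """
    wanted = 2 ** cycles
    unwanted = template_strands * cycles
    return wanted, unwanted

wanted, unwanted = pcr_yield(30)
print(wanted)    # 1073741824 -- over a billion wanted fragments
print(unwanted)  # 60 unwanted long products, matching step 14
```

Note how quickly the exponential term dominates: even after only a few cycles, the wanted fragment outnumbers the unwanted strands.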
2. Taq DNA polymerase that lacks 5'--> 3' exonuclease activity
4. Reaction buffers
Reagent and Volume Table
Description of samples:
Image 1: 3 drops of sybrgreen, 2 drops of calibrator solution. Dot was all blue, not green. This implies the solution is negative for cancer.
Image 2: 4 drops of sybrgreen, 2 drops of water solution. Dot was all blue, no green. This implies the solution is negative for cancer.
Image 3: 3 drops of sybrgreen, 2 drops of patient 1 solution A. Dot has some slight blurs of green. This implies the DNA solution is positive for cancer.
Image 4: 2 drops of sybrgreen, 2 drops patient 1 solution B. Dot was all blue with no sight of green. This implies the DNA solution is negative for cancer.
Image 5: 3 drops of sybrgreen, 2 drops of patient 1 solution C. Dot was all blue with no sight of green. This implies the DNA solution is negative for cancer.
Image 6: 3 drops of sybrgreen, 2 drops of patient 1 solution D. Dot was all blue with no sight of green. This implies the DNA solution is negative for cancer.
Image 7: 3 drops of sybrgreen, 2 drops of patient 2 solution A2. Dot was all blue with no sight of green. This implies the DNA solution is negative for cancer.
Image 8: 3 drops of sybrgreen, 2 drops of patient 2 solution B2. Dot has a speck of bright green. This implies the DNA solution is positive for cancer.
Image 9: 4 drops of sybrgreen, 2 drops of patient 2 solution C2. Dot had some spattered green. This implies the DNA solution is positive for cancer.
Image 10: 5 drops of sybrgreen, 2 drops of patient 2 solution D2. Dot had a really green center. This implies the DNA solution is positive for cancer.
Patient 1: 74065, Male, Age:63
Patient 2: 64835, Male, Age:46
1. Turn on the blue light in the fluorimeter using the switch for the blue LED.
2. Place the smart phone accordingly so that the super-hydrophobic slide is in front of the smart phone.
3. Turn on the camera on the smart phone. **TURN OFF THE FLASH** Set the ISO to 800 or higher. Increase the exposure to maximum.
4. Set the distance between the smart phone and the machine so that the smart phone can take a clear picture of the droplet.
5. First label the blank pipettes according to the patients (A,B,C,D... all eight of them). The pipettes given by the instructor are color coded. The white coded pipette is used for water. the red coded pipette is used for the calibrator (the tube with the red dot). The blue coded pipette is used for the sybrgreen (the tube with the blue dot). The black coded pipette is used to pick up the waste and put it in the cup that collects the waste droplets.
6. First calibrate the machine (to make sure the machine works). Put two droplets of the sybrgreen on the first two dots in the middle. If the droplets are not connected, then add a third droplet that combines the two droplets.
7. Then put two drops of calibrator solution onto the sybrgreen droplet. Then set the smart phone accordingly to take a clear picture of the droplet. Then put the black box on top of the phone and the machine so that the light is completely blocked and the shade of blue or green is shown when the picture is taken.
8. Record observations.
9. Remove the solution from the glass dish using the black pipette and discard it in the plastic cup given for the waste droplets.
10. Repeat steps 6-9, but instead of adding calibrator solution, add 2 droplets of water.
11. Repeat steps 6-9 for patient 1 solutions (A,B,C,D) and patient 2 solutions (A2,B2,C2,D2).
Research and Development
Specific Cancer Marker Detection - The Underlying Technology
The r17879961 sequence will produce a cancer mutation on chromosome 22 of the gene sequence. The normal sequence has a T (thymine) nucleotide at this position, while the mutation sequence has an associated C (cytosine) nucleotide. The Open PCR machine is able to determine whether or not the r17879961 sample has the cancer mutation by replicating the desired mutation exponentially. Positive and negative strands are inserted into the PCR with a certain primer. The primer in the reaction is designed to attach to the C nucleotide that signifies the cancer mutation. One strand has the primer, while the other strand does not. Open PCR will replicate the strand with the certain primer, causing exponential growth. The negative strand will grow in a linear fashion. The PCR process goes through 30 cycles to complete this. After the PCR process, fluorescent dye is added to the solutions. The fluorescent dye will cause the DNA with double strands to glow. Since the PCR has grown the double-stranded positive DNA exponentially, the fluorescent dye glows brighter. Therefore the cancer DNA is in the sample with the glow.
Source: Genetic Science Learning Center (2012, August 6) PCR Virtual Lab. Learn.Genetics. Retrieved November 8, 2012, from http://learn.genetics.utah.edu/content/labs/pcr/
Introduction

A little bit more than a month ago, AnandTech published "No more mysteries: Apple's G5 versus x86, Mac OS X versus Linux" with the ambitious goal of finding out how the Apple platform compares, performance-wise, to the x86 PC platform. The objective was to find out how much faster or slower the Apple machines were compared to their PC alternatives in a variety of server and workstation applications.
Some of the results were very surprising and caught the attention of millions of AnandTech readers. We found out that the Apple platform was a winner when it came to workstation applications, but there were serious performance problems when you run server applications such as MySQL (Relational Database) or Apache (Webserver). The MySQL database running on Mac OS X and the Dual G5 was up to 10 times slower than on the Dual Opteron running Linux.
We suspected that Mac OS X was to blame as low level OS benchmarks (Lmbench 3.0) indicated low OS performance. The whole article was a first attempt to understand better how the Apple platform - Mac OS X + G5 - performs, and as always, first attempts are never completely successful or accurate. As we found more and more indications that the OS, not the CPU, was the one to blame, it became obvious that we should give more proof for our "Mac OS X has a weak spot" theory by testing both the Apple and x86 machines with Linux. My email was simply flooded with hundreds of requests for some Linux Mac testing...even a month after publication!
That is what we'll be doing in this article: we will shed more light on the whole Apple versus x86 PC, IBM G5 versus Intel CPU discussion by showing you what the G5 is capable of when running Linux. This gives us insight on the strength and weakness of Mac OS X, as we compare Linux and Mac OS X on the same machine.
The article won't answer all the questions that the first one had unintentionally created. As we told you in the previous article, Apple pointed out that Oracle and Sybase should fare better than MySQL on the Xserve platform. We will postpone the more in-depth database testing (including Oracle) to a later point in time, when we can test the new Apple Intel platform.
Why Bother?

Why do we bother, now that Apple has announced clearly that the next generation of the Apple machines will be based on Intel? Well, this makes our research even more interesting. As you will see further in the article, the G5 is not the reason why we saw terrible, slow performance. In fact, we found that the IBM PowerPC 970FX, a.k.a. "G5", has a few compelling advantages.
As Apple moves to Intel, the only thing that makes Apple unique, and not yet another x86 PC OEM, is Mac OS X. That is why Apple will attempt to prevent you from running an x86 version of Mac OS X on anything else but their own hardware (using various protection schemes), as Anand reported in "Apple's Move to x86: More Questions Answered". Mac OS X will be the main reason why a consumer will choose an Apple machine instead of a Dell one. So, as we get to know the strengths and weaknesses about this complex but unique OS, we'll get insight into the kind of consumers who would own an Intel based machine with Mac OS X - besides the people who are in love with Apple's gorgeous cases of course....
We also gain insight into the real reasons behind the move to Intel, and what impact it will have for the IT professional. Positive but very vague statements about the move to the Intel architecture have already been preached to the Apple community. For example, it was reported that the "Speed of Apple Intel dev systems impress developers". Proudly, it was announced that the current Apple Intel dev systems - based on a 3.6 GHz Intel Pentium 4 with 2 MB L2 Cache - were faster than a dual G5 2 GHz Mac. That is very ironic for three reasons.
Firstly, Apple's own website contradicts this in every tone. Secondly, we found a 2.5 GHz G5 to perform more or less like a Pentium 4 3 - 3.2 GHz in integer tasks. So, a 2 GHz G5 is probably around the speed of a 2.6 GHz Pentium 4. It is only natural that a much faster single CPU with a better disk and memory system outpaces a slower dual CPU in single threaded booting and development tasks. Thirdly, the whole CPU industry is focused now on convincing the consumers of how much better multi-core CPUs are compared to their "old" single core brethren.
ABO TYPING (blood)
ABO typing is done to determine your blood type. Are you A, B, AB or O? It is done as part of a blood transfusion, organ or bone marrow donation/transplant, or paternity test. For people who simply want to know their own blood type, we suggest discussing with your physician who may give you a test requisition, but there is a fee that you’ll have to pay since it is not covered by the BC medical plan.
ALBUMIN (blood, urine)
Albumin is a protein comprising half of all the protein in the serum. Albumin regulates the pressure in tissues (oncotic pressure), serves as a nutritional source, and carries toxins and metabolites in the blood stream. It is low in severe nutritional deficiency, chronic liver disease, and nephrotic syndrome.
ALCOHOL (blood, urine)
Alcohol (ethanol, ethyl alcohol) can be detected and quantified in body fluids. Following alcohol consumption, blood levels decline at 150 mg/L/h. Alcohol cannot be reliably detected in the urine beyond about 6-8 hrs.
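The linear decline quoted above lends itself to a rough back-of-the-envelope estimate. A minimal sketch, assuming the 150 mg/L/h rate stated above (this is a population average; individual elimination rates vary, so the result is illustrative only):

```python
def hours_until_clear(blood_alcohol_mg_per_l: float,
                      elimination_rate: float = 150.0) -> float:
    """Rough estimate of hours for blood alcohol to reach zero,
    using the ~150 mg/L/h linear decline noted above."""
    if blood_alcohol_mg_per_l <= 0:
        return 0.0
    return blood_alcohol_mg_per_l / elimination_rate

# Example: a level of 800 mg/L (0.08% w/v) clears in roughly 5.3 hours.
print(round(hours_until_clear(800), 1))
```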
ALDOSTERONE (blood, urine)
Aldosterone is a substance manufactured by the adrenal glands that controls salt balance. A tumor in the adrenal gland can produce aldosterone in excess and this will cause high blood pressure. For more information, see the section on Adrenal Disease.
ALKALINE PHOSPHATASE (blood)
Alkaline Phosphatase (ALP) is an enzyme produced by the liver and bone. ALP activity is increased in bone disorders and obstructive (cholestatic) liver disease.
ALPHA - 1 ANTITRYPSIN (blood)
Alpha-1 Antitrypsin is an enzyme that inactivates the protein-degrading enzyme trypsin and prevents it from damaging tissues. A genetic defect (A1AT deficiency) causes liver disease and lung disease.
ALANINE AMINOTRANSFERASE (blood)
Alanine Aminotransferase (ALT) is a liver enzyme that is increased in inflammation of the liver. Minor increases are seen with “flu” and major increases (500-3000) are seen in hepatitis.
ALPHA THALASSEMIA TESTING (by PCR)
Detection of major deletions and point mutations associated with alpha thalassemia using molecular technologies. The method detects the 3.7, 4.2, SEA, MED, FIL, THAI, 20.5, Quong Sze and Constant Spring mutations.
AMYLASE (blood, urine)
Amylase is a digestive enzyme produced by the pancreas. In a disorder known as pancreatitis, the pancreas becomes inflamed and causes severe abdominal pain as well as elevated levels of serum and urine Amylase.
ANDROSTENEDIONE (blood)
Androstenedione is an androgen (male hormone) produced by the adrenal gland (and to some degree by the ovary). It is a useful test for evaluation of hyperandrogenism, CAH and premature adrenarche.
ANGIOTENSIN CONVERTING ENZYME (blood)
Angiotensin Converting Enzyme (ACE) is used to help diagnose and treat Sarcoidosis (a lung condition). It can be abnormal in other lung diseases.
ANTIBIOTIC SUSCEPTIBILITY TEST
If bacteria are found when a swab is cultured, a sample of the bacteria can be grown on a separate culture plate. Small discs (like confetti) that have been soaked with various antibiotics are also placed on the culture plate. The lab can then tell which antibiotics are most effective against the bacteria.
ANTICARDIOLIPIN ANTIBODY (blood)
The presence of antibodies to cardiolipin, a component of cell membranes, may lead to excessive blood clotting. This can cause strokes, blocked blood vessels, and intrauterine fetal death.
ANTI-DNA ANTIBODY (blood)
Anti-DNA is an autoantibody (antibody produced against the patient’s own tissue) whose presence is strong evidence for Systemic Lupus Erythematosus (SLE). A negative test does not rule out this diagnosis. For more information, see the section on Lupus.
APOLIPOPROTEIN A-1 (blood)
Apolipoprotein A-1 is the protein associated with HDL-Cholesterol (“good cholesterol”) particles. Like HDL, low values are one of the most reliable predictors of Coronary Atherosclerotic Disease (CAD).
APOLIPOPROTEIN B–100 (blood)
Apolipoprotein B-100 is the major protein that makes up all of the lipoproteins other than HDL. Apolipoprotein B-100 is the major constituent of LDL-Cholesterol. Increased levels are associated with a higher risk of Coronary Atherosclerotic Disease (CAD). It is typically measured with a standard lipid panel only when the triglyceride level is high.
ASPARTATE AMINOTRANSFERASE (blood)
Aspartate Aminotransferase (AST) is a liver enzyme that is increased in inflammation of the liver. Minor increases are seen with “flu” and major increases (500-3000) are seen in hepatitis. As the test is not very specific, follow-up is typically not recommended unless values exceed 1.5 times the upper limit of normal.
BETA 2 MICROGLOBULIN (blood)
Beta2-microglobulin (BMG) is a low molecular weight protein related to the immunoglobulins. It is increased in various malignant and immune disorders where it is produced proportionally to the tumour load, disease activity and prognosis. It is also increased in chronic inflammation, liver disease, acute viral disease, renal failure, and chronic cadmium exposure (a good industrial marker) – urine values increase. BMG can be used in assessing renal function, particularly in kidney-transplant recipients and in patients suspected of having renal tubular disease. It also can serve as a nonspecific but relatively sensitive marker of various neoplastic, inflammatory, and infectious conditions. Early hopes that it would be a useful serum test for malignancy have not been fulfilled, but it does have prognostic value for patients with lymphoproliferative disease, particularly multiple myeloma. More recent reports have suggested a role for BMG as a prognostic marker in human immunodeficiency virus (HIV) infection.
BETA NATRIURETIC PEPTIDE (blood)
Beta natriuretic peptide (BNP) is used to evaluate cardiac function in the investigation of heart failure. LifeLabs performs the BNP test only, not NT-proBNP.
BICARBONATE (blood)
Bicarbonate is the main acid buffer in the blood stream. Low values occur in metabolic acidosis and respiratory alkalosis. High values occur in metabolic alkalosis and respiratory acidosis.
BILIRUBIN, DIRECT/CONJUGATED (blood)
Bilirubin is the end result of hemoglobin breakdown and is removed from the body by the liver. Increased bilirubin comes from liver disease or from increased hemoglobin breakdown. Bilirubin is then chemically modified in the liver to the direct (or conjugated) form that is more readily eliminated by the body. Thus, an increase in the direct/conjugated fraction indicates an obstructive liver disorder. See Bilirubin (Total) below.
BILIRUBIN, TOTAL (blood)
Bilirubin is the end result of hemoglobin breakdown and is removed from the body by the liver. Increased bilirubin comes from liver disease or from increased hemoglobin breakdown. Bilirubin is increased in the serum in the following disorders:
• hemolysis, i.e. excessive breakdown of red cells
• obstructive liver disease (bile duct obstruction or cholestasis), where the bilirubin is conjugated in the liver but is not removed; thus the conjugated or "direct" bilirubin is raised
• hepatocellular disease, e.g. hepatitis, alcoholism
BLEEDING TIME
To assess the ability of the body to control bleeding, a test is performed by cutting the skin and timing how long it takes to stop bleeding. This is now an obsolete test.
BLOOD CULTURE
To determine if bacteria are growing in someone’s blood stream, a sample of blood is removed and placed in a bottle of nutrients. It can then be examined at intervals to see if bacteria have grown.
C - REACTIVE PROTEIN (blood)
C-Reactive Protein (CRP) is a protein that increases in response to inflammation. Recent evidence has suggested that the high sensitivity CRP test (hsCRP) may be useful in predicting heart attacks. All samples referred to LifeLabs for CRP are analyzed using an hsCRP method. For more information, see the section on Lipid Testing.
CA 125 (blood)
CA 125 is a tumour marker of ovarian cancer. The CA-125 molecule is widely distributed on the surface of both healthy and malignant cells of mesothelial origin including pleural, pericardial, peritoneal and endometrial cells, as well as in the normal genital tract and amniotic membrane. It is not a screening test for ovarian cancer; rather, it is used to monitor ovarian cancer following treatment. CA-125 is increased in many patients with ovarian cancer; the proportion rises with cancer stage. Values > 35 U/L are predictive of intraperitoneal tumour or recurrence. Benign conditions associated with a CA-125 increase include: menstruation, pregnancy, benign pelvic tumors, pelvic inflammatory diseases, ovarian hyperstimulation syndrome, endometriosis, peritonitis and many diseases leading to pleural effusion or ascites. CA-125 values of up to 5000 units/mL have been reported in some benign conditions.
CA 15-3 (blood)
CA 15-3 is a protein (also known as Breast Cancer Mucin) that is produced by many cancers of the breast. It is not a screening test for breast cancer; rather, it is used as a follow-up after breast cancer has been treated. This may allow a relapse to be detected before becoming clinically apparent.
CA 19-9 (blood)
CA 19-9 is a tumour marker of gastrointestinal malignancies such as cholangiocarcinoma, pancreatic cancer and colon cancer. It is not a screening test; rather, it is used as a follow-up after the initially diagnosed tumour has been treated. This may allow a relapse to be detected before becoming clinically apparent.
CALCIUM (blood, urine)
Serum calcium levels are held within tight limits in the blood by a complex sequence of hormones. Increases in serum calcium are seen with: parathyroid adenoma, pheochromocytoma, lymphoma, parathyroid carcinoma, tertiary hyperparathyroidism (chronic renal failure), PTH secreting tumours, metastatic carcinoma to bone (breast, lung, kidney, lymphoma, leukemia), vitamin D overdosage, multiple myeloma, tumour producing a hormone that acts like PTH but does not cross react in the assay (parathyroid hormone related peptide (PTHrP)), sarcoidosis, milk alkali syndrome, Paget’s disease, thyrotoxicosis, acromegaly, and acute tubular necrosis.
Some reasons for a low calcium are low albumin, decreased PTH, PTH insensitivity (pseudohypoparathyroidism), vitamin D deficiency (due to diet or malabsorption), low dietary calcium, calcium malabsorption, chronic renal failure, magnesium deficiency, anticonvulsant therapy, acute pancreatitis, massive blood transfusion, osteomalacia, renal disease, liver disease, diuretics, increased serum phosphate (e.g. due to renal insufficiency) and hyperadrenalism.
The urine calcium is measured to determine if the body is losing more calcium than normal. High urine calcium also poses a risk for calcium kidney stones.
CAMPYLOBACTER (by PCR)
Campylobacter species are the most common cause of bacterial diarrhea, and also cause abdominal pain, fever, headaches and sometimes vomiting. Molecular methods provide rapid sensitive identification of the most clinically important group of Campylobacter species.
CARBAMAZEPINE (blood)
Carbamazepine (Tegretol) is a drug used to treat seizures. Decreased values are seen with Phenobarbital and Phenytoin, both of which increase carbamazepine metabolism. Measurement is carried out to adjust the dosage. For more information, see the section on Therapeutic Drug Monitoring.
CERULOPLASMIN (blood)
A copper-containing protein, ceruloplasmin is markedly decreased in Wilson’s disease, a genetic condition in which copper deposits in the liver, brain, and eye. This test is most helpful in younger patients with otherwise unexplained liver disease.
CHOLESTEROL (blood)
Cholesterol is a blood lipid whose levels correlate with the risk of heart and artery disease. There are two principal forms of cholesterol: low-density lipoprotein (LDL) and high-density lipoprotein (HDL). HDL is the “good” cholesterol removed from blood vessels while LDL represents the “bad” cholesterol most directly related to the risk of heart disease. HDL cholesterol and total cholesterol are measured directly but LDL cholesterol is obtained by calculation. In BC, Cholesterol testing is governed by a Protocol developed by the Medical Services Commission and the BCMA. For more information, see the section on Cholesterol.
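The source does not name the calculation used to derive LDL; the standard one is the Friedewald equation, sketched here in SI units (mmol/L) purely as an illustration:

```python
def friedewald_ldl(total_chol: float, hdl: float, trig: float) -> float:
    """Estimate LDL cholesterol (mmol/L) with the Friedewald equation:
    LDL = total cholesterol - HDL - triglycerides / 2.2.
    The estimate is unreliable when triglycerides exceed ~4.5 mmol/L."""
    if trig > 4.5:
        raise ValueError("Friedewald estimate unreliable at TG > 4.5 mmol/L")
    return total_chol - hdl - trig / 2.2

# Example: TC 5.2, HDL 1.3, TG 1.1 mmol/L -> LDL of about 3.4 mmol/L
print(round(friedewald_ldl(5.2, 1.3, 1.1), 1))
```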
CHLAMYDIA
Chlamydia is a genus of microorganisms that causes a variety of diseases. Most often it is studied in genital cultures or urine, as it is a common sexually transmitted disease (STD).
CHLORIDE (blood, urine)
One of the principal electrolytes in the blood stream, chloride is an ion whose negative charge balances positively charged sodium and potassium.
CHOLINESTERASE
There are two types of cholinesterase test. The form found in red blood cells is used to detect exposure to organophosphorus insecticides, but is not currently available in BC. The serum cholinesterase test is used to determine whether a patient can break down the muscle paralyzing drugs that are used in anesthetics.
CITRATE (urine)
Citrate is a substance excreted in the urine. Low urine citrate concentrations increase the risk of kidney stones.
C. DIFFICILE TOXIN (stool)
Sometimes when antibiotics are taken orally they disrupt the normal bacteria in the bowel and allow bacteria known as Clostridium difficile to grow. C. difficile produces a toxin that can cause diarrhea.
C. DIFFICILE (by PCR)
Molecular testing for C. difficile allows rapid and simultaneous identification and detection of genes for toxin A and toxin B, both associated with diarrhea.
CLOZAPINE (blood)
Clozapine (Clozaril®), a tricyclic dibenzodiazepine, is used for the symptomatic management of psychotic disorders and is considered an atypical antipsychotic drug. It is currently used primarily for the treatment of patients with schizophrenia or schizoaffective disorders who are at risk for recurrent suicidal behavior and who have encountered nonresponse or adverse, intolerable extra-pyramidal side effects with more classical antipsychotics (chlorpromazine, haloperidol). For more information, see the section on Therapeutic Drug Monitoring.
COLD AGGLUTININS (blood)
A number of diseases result in the production of IgM antibodies (occasionally IgG) that coat red blood cells. They are so called because their optimal action in causing red cell damage is at temperatures lower than 37°C. The test is used in the diagnosis of hemolytic anemia. Cold agglutinins are found in a number of disorders: idiopathic (unknown cause) 25%, drug induced 15%, neoplasia 15% (lymphoma, leukemia, carcinoma), infections 10% (mycoplasma, viral pneumonia, infectious mononucleosis), collagen diseases 2% (SLE, rheumatoid arthritis), and other 5%. Cold hemagglutinin disease (IgM mediated) usually occurs in persons over 50 years of age (more commonly in women). Patients may present with acrocyanosis, Raynaud’s, or even hemolysis following exposure to the cold.
COMPLEMENT C3 and C4 (blood)
The complement system consists of an interacting group of circulating proteins that mediate the inflammatory response and help to destroy foreign particulate matter (particularly microorganisms and viruses). The complement proteins circulate as inactive precursors that become activated in a precise sequence that allows mediation of their collective response.
C3 is one of the “routine” complement tests. Patients with homozygous C3 deficiency have an increased incidence of bacterial infections. A decrease in C3 is associated with complement fixing disorders: acute glomerulonephritis, membranoproliferative glomerulonephritis, immune complex disease, and active SLE. C4 is part of the classical pathway of complement. C4 measurements are used with C3 to determine which pathways have been activated. Values are low in: SLE, early glomerulonephritis, immune complex disease, cryoglobulinemia, and Hereditary Angioneurotic Edema.
COPPER (blood, urine)
Copper in serum is measured to confirm a diagnosis of Wilson Disease, though ceruloplasmin or 24h urine copper are more helpful (see discussion on Ceruloplasmin).
CORTISOL (blood, urine)
Cortisol is the main product of the adrenal gland. Excess cortisol production is called Cushing’s Syndrome. Inadequate cortisol production is called Addison’s Disease. Increased values are seen in adrenal tumour, pituitary tumour, stress, depression, hypoglycemia, and some drugs (hydrocortisone, methylprednisolone). Decreased values are seen in adrenal failure, pituitary insufficiency, and some drugs (prednisone, dexamethasone). For more information, see the section on Adrenal Disease.
CREATINE KINASE (blood)
Creatine kinase (CK) is a muscle enzyme that is released into serum in any form of muscle disease or trauma. Levels are particularly high in myositis / dermatomyositis and JRA. Increases are also seen in myocardial necrosis (heart attack), strenuous physical exercise, surgery, burns, alcoholic myopathy, alcoholic withdrawal syndrome, low potassium levels (hypokalemia), hypothyroidism, renal failure, obstructive lung disease, pneumonia, and infections. Binding of immunoglobulins to CK (“macro-CK”) may result in falsely elevated results.
CK-MB is a form found primarily in heart muscle, from which it is released following a heart attack. The test for CK-MB, though, has been largely replaced by cardiac Troponin, which is both more sensitive and specific. The CK-MB index, reported along with CK-MB, is defined as the percentage of CK-MB activity relative to total CK.
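The CK-MB index defined above is a simple percentage. A minimal sketch (any decision cutoff for interpreting the index is lab-specific and is not given in the source):

```python
def ck_mb_index(ck_mb: float, total_ck: float) -> float:
    """CK-MB index: CK-MB activity expressed as a percentage of
    total CK activity, as described above."""
    if total_ck <= 0:
        raise ValueError("total CK must be positive")
    return 100.0 * ck_mb / total_ck

# Example: CK-MB of 25 U/L with a total CK of 500 U/L gives an index of 5.0%.
print(ck_mb_index(25, 500))
```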
CREATININE (blood, urine)
Creatinine is a waste product of muscle metabolism. It is produced by the body in rough proportion to the amount of muscle present. It is then removed by the kidney. Increases in serum creatinine are due to increased production (rapid muscle breakdown, fever, burns, trauma) or decreased removal (kidney failure).
Urine creatinine is measured to check for a complete 24hr collection or, for random collections, to correct for normal urine dilution.
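The dilution correction mentioned above is usually done by expressing the analyte as a ratio to urine creatinine (for example, the albumin-to-creatinine ratio). A minimal sketch; the units and example values are illustrative assumptions, not from the source:

```python
def analyte_creatinine_ratio(analyte_mg_per_l: float,
                             creatinine_mmol_per_l: float) -> float:
    """Correct a random-urine result for dilution by dividing the
    analyte concentration (mg/L) by urine creatinine (mmol/L),
    yielding a ratio in mg/mmol."""
    if creatinine_mmol_per_l <= 0:
        raise ValueError("urine creatinine must be positive")
    return analyte_mg_per_l / creatinine_mmol_per_l

# Example: urine albumin 30 mg/L with creatinine 10 mmol/L -> 3.0 mg/mmol
print(analyte_creatinine_ratio(30, 10))
```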
CRYOFIBRINOGEN (blood)
A peculiar form of fibrinogen (a clotting protein) that coagulates when the blood temperature drops below 37°C.
CRYOGLOBULINS (blood)
Cryoglobulins are protein complexes which precipitate at temperatures below normal body temperature (usually 4°C). Patients may suffer from cold-induced precipitation of protein in small, peripheral blood vessels causing vascular purpura, bleeding, urticaria, Raynaud’s, pain and cyanosis. Some patients have Essential Mixed Cryoglobulinemia that presents with purpura, arthralgia, weakness, lymphadenopathy, hepatosplenomegaly, and adrenal failure.
C-TELOPEPTIDE
C-telopeptide (CTX) is a degradation product of collagen. The test is used for bone fracture risk assessment, osteoporosis medication monitoring and prediction of risk for osteonecrosis of the jaw. At present, the test is not publicly funded in BC.
Patient and physician brochures on CTX are available from LifeLabs.
CULTURE FOR DERMATOPHYTE
Scrapings from skin lesions are grown on gelatin media to identify any skin fungi that may be present.
CYCLOSPORINE (blood)
Cyclosporine A is a drug used in organ transplant patients that reduces the “rejection” of the transplant. It acts by interfering with interleukin-2 (a growth factor for T-lymphocytes). It is measured to keep the cyclosporine level in the therapeutic range but below toxic levels. For more information, see the section on Therapeutic Drug Monitoring.
D-DIMER (blood)
D-dimer is a fibrin degradation product, a small protein fragment present in the blood after a blood clot is degraded by fibrinolysis. D-dimer concentration is determined by a blood test to help diagnose thrombosis. Since its introduction in the 1990s, it has become an important test for patients suspected of thrombotic disorders, e.g. deep venous thrombosis (DVT) or pulmonary embolism (PE). In patients suspected of disseminated intravascular coagulation (DIC), D-dimers may aid in the diagnosis. While a negative result practically rules out thrombosis, a positive result can indicate thrombosis but does not rule out other potential aetiologies. Its main use is to exclude thromboembolic disease where the probability is low.
DEOXYPYRIDINOLINE CROSSLINKS (urine)
Deoxypyridinoline crosslinks are the waste products of normal bone degradation. The amount found in the urine is directly related to the rate at which bone is being degraded. For more information, see the section on Osteoporosis.
DHEAS (blood)
DHEAS (Dehydroepiandrosterone sulfate) is the main metabolite of DHEA, the principal androgen (‘male hormone’) of the adrenal gland. Its measurement is a good indicator of whether the adrenal gland is producing too much androgen. DHEA has gained popularity (though not in the medical community) as a substance that may increase muscle strength and general energy and vigor.
DIGOXIN (blood)
Digoxin is a heart drug that helps the contraction of the heart muscle. There is very little difference between the blood concentration where therapy is optimal and where toxicity can occur. For more information, see the section on Therapeutic Drug Monitoring.
DRUG SCREEN (urine)
A group of tests carried out to detect the presence of drugs of abuse (cocaine, heroin, morphine, amphetamines, cannabis, etc.). These tests are intended to detect as many compounds within one group as possible: they do not identify which compound is present. This information is provided by a Confirmation test, often referred to as ‘GC/MS’ (the name of the technique used). Note that these tests are not suitable for medicolegal purposes unless prior arrangements have been made with LifeLabs.
[Table (layout lost in extraction): urine drug screen detection windows. Column headings included “Detection Window (d)”, “Related Drugs Not Detected”, “BC MSP Methadone Panel” and “BC MSP Medical Drug Screen”. Recoverable row entries include MDA (‘Ecstasy’ metabolite), benzoylecgonine (cocaine metabolite) and EDDP (methadone metabolite); detection windows of 1–3, 2–3, 3–5 and 6–7 days appear, with notes of “up to 30” days and “to 42 d for chronic” use. Footnote (*): drug either non-reactive or typically at levels below the test cutoff for a positive result.]
More information about drug testing at LifeLabs
ENTERIC PATHOGENS (by PCR)
Enteric pathogens are a group of microorganisms such as Salmonella or E. coli that cause a variety of gastrointestinal symptoms such as diarrhea. Molecular methods allow rapid and simultaneous identification of major enteric pathogens or the detection of genes for toxigenic agents associated with diarrhea.
EXTRACTABLE NUCLEAR ANTIGENS (blood)
In certain autoimmune diseases the body forms antibodies to proteins found in the nucleus. The extractable nuclear antigens (ENA) test consists of an examination for auto-antibodies to proteins known as Sm, RNP, SS-A, SS-B, Scl 70 and Jo-1. The interpretation of this panel is outlined below:
- Sm: a confirmatory test for SLE if the patient has an increased ANA and anti-DNA; a positive result often heralds CNS or renal complications.
- RNP: positive in 35–40% of persons with SLE. Also positive in Mixed Connective Tissue Disease (if RNP is the only positive antibody, renal complications are less likely).
- SS-A (Ro): positive in 25% of SLE patients and in 40% of Sjogren’s Syndrome. Also positive in antibody-negative SLE, in infants with congenital complete heart block, and in neonatal lupus.
- SS-B (La): positive in 10–15% of SLE and 10–15% of Sjogren’s disease; also positive in antibody-negative SLE and subacute SLE.
- Scl-70: positive in scleroderma and other connective tissue diseases.
- Jo-1: a positive result is consistent with the diagnosis of polymyositis and suggests an increased risk of pulmonary involvement with fibrosis in such patients.
For more information, see the section on Lupus.
ESR, WESTERGREN (blood)
Blood is placed in a long tube that is held vertically for one hour. The red cells will fall (or sediment). The height of the sedimented cells correlates with non-specific inflammation.
ESTRADIOL (blood)
17β-Estradiol is the dominant form of estrogen in the human body. It is normally produced by the ovary, the adrenal gland (minor), and by peripheral conversion of adrenal androgens. It is measured to determine excessive estrogen in men and children. It is not as useful in determining reduced estrogen in women because the female reference range is very wide and reductions may remain in the normal range. Increased values are seen in children with: precocious puberty, estrogen producing tumours, and ingestion of exogenous estrogen. Male adults have increased values in estrogen producing tumours, gynecomastia, cirrhosis of the liver (metabolic breakdown imbalance), feminizing testes syndrome, estrogen medications, and spironolactone medication. Elevated values in female adults are difficult to distinguish from normal, and are found with estradiol-containing medications and pregnancy.
Decreased values are not generally detectable in children and male adults. In women, low values accompany ovarian failure, menopause, pituitary failure, and Turner’s syndrome.
FACTOR V LEIDEN AND PROTHROMBIN GENE MUTATIONS
Factor V Leiden is a major cause of unexplained thrombotic disease and is responsible for activated protein C (APC) resistance; a mutation in the prothrombin gene is an additional minor risk factor. These mutations can be rapidly detected using molecular methods.
FECAL FAT (stool)
Fat can be found in feces using a special stain and a microscope. Increased fat in feces indicates fat malabsorption.
FECAL MEAT FIBERS (stool)
Fecal meat fibers indicate incomplete digestion and absorption.
FERRITIN (blood)
Ferritin is a test that indicates the level of stored iron. Low values indicate iron deficiency and high values are seen in both excess iron syndromes (hemochromatosis) and in inflammation of iron storage tissues (e.g. hepatitis). For more information, see the section on Hemachromatosis.
FIBRINOGEN (blood)
Fibrinogen is an important protein in the blood clotting mechanism. It is measured to determine the nature of bleeding disorders.
FOLATE (blood)
Folate (Folic acid and folinic acid) is a vitamin. Low levels are exceedingly rare as many foods are now fortified with folate. Low folate is due to extreme dietary deficiency and causes a particular form of anemia and skin disease. High levels are only seen with folate ingestion. High homocysteine may be due to relative folate deficiency. Folate deficiency has been implicated as a cause of neural tube defects in newborns.
FOLLICLE STIMULATING HORMONE (blood)
Follicle Stimulating Hormone (FSH) is a pituitary hormone that controls the maturation of “eggs” in the ovary and spermatozoa in the testes. Values rise in ovulation and ovarian or testicular failure (including menopause). It is low in infertility and pituitary insufficiency. Its level varies throughout a normal menstrual cycle.
GAMMA GLUTAMYL TRANSPEPTIDASE (blood)
Gamma Glutamyl Transpeptidase (GGT) is an enzyme found in the liver. Its values rise in liver disease and excessive alcohol consumption.
GLUCOSE (blood, urine)
Glucose is the main form of circulating blood sugar. Its level is under the control of insulin. It is increased following meals and in diabetes. It may be low in response to meals (reactive hypoglycemia) or in excess insulin situations (too much insulin administration without food or an insulin-secreting tumor). Its main use is the diagnosis of diabetes or monitoring diabetic therapy. For more information, see the section on Diabetes.
GLUCOSE GESTATIONAL (blood)
Pregnant women are usually checked for the presence of Gestational Diabetes.
Screening for Gestational Diabetes
A 50 gram glucose drink is administered between the 24th and 28th week without regard to time of day or time of last meal. The serum glucose is measured 1 hour later. A high value of glucose indicates the need for a full diagnostic glucose tolerance test.
Full Diagnostic Glucose Tolerance Test
A 100 gram glucose drink is administered in the morning after an overnight fast, and after at least 3 days of unrestricted diet and normal physical activity. Blood glucose is measured at 0, 1, 2, and 3 hours post-drink. For more information, see the section on Diabetes.
GLUCOSE TOLERANCE (blood)
The Glucose Tolerance test is the definitive test for diabetes mellitus. It is only carried out when this diagnosis cannot be made from single glucose measurements. It consists of taking a sugar drink followed by several blood glucose measurements. For more information, see the section on Diabetes.
GRAM STAIN
The gram stain is the microscopic examination of a slide from a microbiology swab or a microbiology plate. A special stain is used. This study shows the presence of bacteria if they are present and categorizes them as gram positive (stain accepting) or gram negative. The test allows for a rapid initial assessment of bacterial infection.
GROUP B STREPTOCOCCUS (by REAL-TIME PCR)
Group B Streptococcus (GBS) is the leading cause of bacterial sepsis and meningitis for the last two decades. Screening during late pregnancy can help prevent neonatal infections. Real-time PCR allows rapid and sensitive detection of GBS.
HAPTOGLOBIN (blood)
Haptoglobin is a protein that “mops up” hemoglobin that is free (outside the red cells) in the serum. After binding hemoglobin it is removed from the serum. Low levels are found in red cell destruction (hemolytic anemia).
H. PYLORI BREATH TEST (breath)
The H. pylori breath test is used for the evaluation of gastritis and peptic ulcers. For more information, see the section on Helicobacter pylori.
HEMATOLOGY PANEL (blood)
The Hematology Panel (profile) is the mainstay of hematology diagnosis. For more information, see the sections on the Hematology Profile and Anemia.
HEMOGLOBIN (blood)
Hemoglobin is a complex iron-containing protein that is responsible for carrying oxygen from the lungs to all the tissues of the body. It is measured as part of the Hematology Panel. Low hemoglobin is referred to as anemia; for more information, see the sections on the Hematology Profile and Anemia. High levels of hemoglobin (erythrocytosis) may be secondary (e.g. hypoxia) or primary (e.g. myeloproliferative disorder, or PRV).
HEMOGLOBIN A1c (blood)
Normal hemoglobin binds blood sugar (glucose) through an irreversible process known as glycation. The higher the glucose level, the greater the level of modified (glycated) hemoglobin. Because the life span of a red cell is about 120 days, the amount of glycated hemoglobin is a measure of the “average” blood glucose level over a four-month period. Its measurement is therefore recommended in all diabetics as a way of assessing diabetic control. For more information, see the section on Diabetes.
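Because HbA1c tracks average glucose, it can be converted to an estimated average glucose (eAG). A hedged sketch using the widely cited ADAG study regression — an assumption on my part, since the source does not specify any conversion:

```python
def estimated_average_glucose(hba1c_percent: float) -> float:
    """Convert HbA1c (%) to estimated average glucose in mmol/L
    using the ADAG study regression: eAG = 1.59 * A1c - 2.59.
    (Assumed formula; not stated in the source text.)"""
    return 1.59 * hba1c_percent - 2.59

# Example: an HbA1c of 7.0% corresponds to roughly 8.5 mmol/L average glucose.
print(round(estimated_average_glucose(7.0), 1))
```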
HEMOGLOBIN A2 (blood)
Hemoglobin A2 is one of the forms of Hemoglobin. It is measured to determine whether you have an abnormal type of hemoglobin (hemoglobinopathy) or an abnormal production of hemoglobin (thalassemia). For more information, see the section on Thalassemia.
HEMOGLOBIN, FETAL (blood)
Hemoglobin F is normally found in unborn babies and newborns. It is also present in adults with abnormal hemoglobin production.
HEMOSIDERIN (urine)
When the plasma hemoglobin exceeds 5 g/L, the capacity of haptoglobin and hemopexin to “mop it up” may be exceeded and the free hemoglobin passes through the glomerulus and into the kidney tubule. Some hemoglobin enters the urine as free hemoglobin. The hemoglobin is absorbed by the proximal tubule and converted into hemosiderin. Hemosiderinuria occurs when the cells lining the proximal tubule are shed into the urine.
HEPATITIS A IGM (blood)
Hepatitis A IgM is the standard test for the presence of Hepatitis A infection. A positive test indicates that Hepatitis A infection has occurred in the past six months. A negative test indicates no infection from this agent in the past six months. For more information, see the section on Hepatitis.
HEPATITIS B CORE ANTIBODY (blood)
A positive Hepatitis B core antibody test indicates the presence of antibodies to a part of the Hepatitis B virus. It is present very early on in a Hepatitis B infection and may stay positive for decades after the infection has resolved. Immunization does not produce positive results. Thus, a positive result does not by itself diagnose current Hepatitis B; the Hepatitis B surface antigen test must be done to diagnose the presence of active infection. For more information, see the section on Hepatitis.
HEPATITIS B SURFACE ANTIBODY (blood)
A positive Hepatitis B surface antibody test indicates that an individual has either been immunized with Hepatitis B vaccine or has encountered the actual Hepatitis B virus. For more information, see the section on Hepatitis.
HEPATITIS B SURFACE ANTIGEN (blood)
The test for Hepatitis B surface antigen tells whether the Hepatitis B virus is present. A positive test is found in active Hepatitis B infection, in chronic Hepatitis B, and in the Hepatitis B carrier state. For more information, please see the section on Hepatitis.
HEMOGLOBIN ELECTROPHORESIS (blood)
Hemoglobin electrophoresis was the main test to detect the presence of abnormal hemoglobin formation. Today, we do this study using High Pressure Liquid Chromatography but the name “electrophoresis” is still commonly used. For more information, please see the section on Thalassemia.
HLA B27 (blood)
HLA B27 is strongly associated with ankylosing spondylitis and several other rheumatic diseases. Molecular methods allow specific and sensitive detection of most subtypes of the HLA B27 allele.
HOMOCYST(E)INE (blood)
The level of homocyst(e)ine has been shown to be a risk factor for vascular disease and thromboembolism. For more information, see the section on Homocysteine.
HUMAN CHORIONIC GONADOTROPIN (blood, urine)
Human Chorionic Gonadotropin (hCG) is a hormone produced by a pregnancy (chorion layer of the placenta) as a way to nurture the pregnancy in the first few months. It is present in the maternal blood on the first day the pregnancy is established. It moves quickly from the blood into the urine. The qualitative (i.e. ‘positive’ or ‘negative’) detection of hCG is the basis of urine pregnancy tests. Quantitative (i.e. numerical) measurements of hCG are used to diagnose pregnancy as well as reveal problems such as ectopic pregnancy or hydatidiform mole, spontaneous abortion, or tumors such as choriocarcinoma of the ovary or testes. hCG is also part of the Triple Marker Test. For more information, see the section on Pregnancy.
HUMAN GROWTH HORMONE (blood)
Human Growth Hormone (GH) is produced by the pituitary gland. It stimulates the growth of bones. Too much causes gigantism and acromegaly, while too little results in dwarfism.
HUMAN PAPILLOMAVIRUS (HPV)
HPV, or human papillomavirus, is the most common sexually transmitted infection in the world. There are more than 100 different types of HPV that may cause a variety of diseases ranging from warts to cancer. Cervical cancer is strongly associated with high-risk HPV infection. Identifying the virus can shed light on the cause of abnormal cells seen during a Pap test and help identify patients who may require further tests. A positive result for any of the high-risk types can be closely monitored for any pre-cancerous conditions. A negative test for high-risk HPV can provide peace of mind that the patient is at lower risk for developing cervical cancer.
5-HYDROXYINDOLEACETIC ACID (urine)
5-HIAA is produced in excess by an unusual tumor of the intestine known as a carcinoid, and can be measured in the urine. The presence of a carcinoid tumor may cause symptoms of hot flushes, hypertension and a tricuspid valve heart murmur.
IMMUNOGLOBULIN A (blood)
Immunoglobulin A (IgA) is a type of antibody that is found on mucous membranes (e.g. gastrointestinal tract, vagina). If deficient, patients may suffer from chronic diarrhea and certain types of recurrent infections. For more information, see the section on Multiple Myeloma.
IMMUNOGLOBULIN E (blood)
Immunoglobulin E (IgE) is a type of antibody that mediates the allergic reaction. It is normally very useful in removing unwanted materials from the body. In allergic persons the level may be abnormally high. Thus, it can be used to tell if a condition is allergic in nature. It does not reveal to what a person may be allergic. For this purpose, the Specific Allergen IgE (SAIGE) test is used. For more information, see the section on Allergies.
IMMUNOGLOBULIN G (blood)
Immunoglobulin G (IgG) molecules are the main type of antibodies that circulate in the blood stream. General increases in IgG accompany infections and inflammation and malignancy of the antibody producing cells. Decreases occur in some infections and may also be due to a genetic or acquired defect. When the IgG is low a person may be susceptible to infections. For more information, see the section on Multiple Myeloma.
IMMUNOGLOBULIN M (blood)
Immunoglobulin M (IgM) molecules are antibodies that are recently formed and appear early in infections. Increased values are seen in: acute stimulation of the immune system (immunization, viral infections), selective increase is seen in neoplasia, IgM myeloma, and Waldenstrom’s macroglobulinemia. Decreased values are seen in: children, agammaglobulinemia (inherited) and myeloma. For more information, see the section on Multiple Myeloma.
IMMUNOFIXATION (blood, urine)
Immunofixation is a chemical method for identifying abnormal proteins in serum and urine. It is used in the diagnosis and monitoring of multiple myeloma or monoclonal gammopathy of undetermined significance (MGUS). For more information, see the section on Multiple Myeloma.
INTERNATIONAL NORMALIZED RATIO (blood)
The International Normalized Ratio (also known as the Prothrombin Time) measures defects in part of the clotting mechanism. It is used to detect clotting abnormalities. However, it is mainly used to measure the effect of “blood thinning” medications such as Warfarin or Coumadin. For more information, please see the section on Anticoagulant Monitoring.
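The INR itself is derived from the measured prothrombin time by a standard calculation; a minimal sketch, where the reagent's International Sensitivity Index (ISI) and the laboratory's mean normal prothrombin time are assumed inputs and the numbers in the example are purely illustrative:

```python
def inr(patient_pt_sec: float, mean_normal_pt_sec: float, isi: float) -> float:
    """International Normalized Ratio: the patient's prothrombin time
    divided by the laboratory's mean normal prothrombin time, raised to
    the power of the reagent's International Sensitivity Index (ISI)."""
    return (patient_pt_sec / mean_normal_pt_sec) ** isi

# A patient PT of 24 s against a mean normal PT of 12 s with an ISI of 1.0
# gives an INR of 2.0; a commonly cited warfarin target range is about 2-3.
print(round(inr(24.0, 12.0, 1.0), 1))  # 2.0
```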
INSULIN (blood)
Measuring insulin is useful in diagnosing insulinoma, when used in conjunction with proinsulin and C-peptide measurements. It is not part of the initial diagnostic work-up of diabetes.
IRON (blood)
Iron is an essential nutrient required for normal hemoglobin formation. Inadequate iron may lead to iron deficiency anemia. The test is used to measure iron stores although the Serum Ferritin test is the recommended initial test for assessment of iron disorders.
JOINT FLUID CRYSTAL
The diagnosis of the cause of a swollen joint may be enhanced by removing some of the joint fluid and examining it under a microscope. Gout is characterized by the presence of uric acid crystals.
JOINT FLUID DIFFERENTIAL
The cells in a joint fluid are examined to determine the cause of a joint swelling.
JOINT FLUID, PROTEIN
The amount of protein in a joint fluid helps to determine the cause of the swelling.
KOH TEST (skin)
The KOH test is used to identify fungus infections of the skin.
LACTATE DEHYDROGENASE (blood)
Lactate Dehydrogenase (LDH, LD) is an enzyme found in any cell that metabolizes glucose, and so is present in most tissues. An increase in the LD commonly indicates inflammation in the liver or muscle or destruction of red cells.
LEAD (blood)
Lead poisoning is generally due to chronic overexposure in industry or in children with pica (compulsive eating of non-foods). The overt syndrome involves GI symptoms, convulsions, coma, abdominal pain, neuropathy, anemia, and nephropathy. Chronic exposure may cause decreased mental functioning without ever causing the overt syndrome. While it is recommended that values in children should not exceed 0.48 umol/L, a study in a Vancouver population found it unusual for local children to exhibit a blood lead level greater than 0.25 umol/L.
LEUCOCYTE ALKALINE PHOSPHATASE (blood)
Leucocyte Alkaline Phosphatase is present in the granules of neutrophils. The amount can be measured as the LAP score. Stimulated neutrophils have higher concentrations. LAP is decreased in CML and the LAP score has been used to distinguish CML from reactive neutrophilia and other myeloproliferative syndromes. Increased values are seen in neutrophil response to infection, inflammation, tumor, necrosis; polycythemia, and essential thrombocythemia. Decreased values are seen in chronic myelogenous leukemia (CML), some myelodysplastic syndromes, acute myelogenous leukemia (AML), paroxysmal nocturnal hemoglobinuria (PNH), idiopathic thrombocytopenic purpura (ITP), pernicious anemia, and infectious mononucleosis. However, the LAP score is now considered an obsolete test: more specific molecular tests are done in suspected cases of myeloproliferative disorders using peripheral blood.
LUTEINIZING HORMONE (blood)
Luteinizing Hormone (LH) is made by the pituitary gland. It controls the production of Estrogen in the ovary and Testosterone from the testes. It is elevated in ovarian failure (menopause) or testicular failure. It is low in pituitary insufficiency.
LIPASE (blood)
Lipase is an enzyme produced by the pancreas to facilitate fat digestion. It is elevated in pancreatitis, an inflammation of the pancreas causing severe abdominal pain.
LITHIUM (blood)
Lithium is a drug that is used for the treatment of manic-depressive (bipolar) disorders. Effective medication with Lithium is adjusted by measuring its level in serum. For more information, see the section on Therapeutic Drug Monitoring.
MAGNESIUM (blood, urine)
Magnesium is an element that is necessary for many metabolic functions (an enzyme co-factor). Low levels may occur in gastrointestinal and nutritional disorders.
Measuring urine magnesium is probably the best way to determine magnesium status, but only when performed as part of a loading test.
MALARIA (blood)
Malaria is a parasitic infection of the red cells that is transmitted by mosquitoes commonly found in equatorial countries. It is diagnosed by careful examination of the blood with a microscope. The chances of finding malarial parasites in blood are high if blood is taken during febrile episodes, but a negative test does not exclude malaria, especially in patients with history of travel to malarial zones. Note that malaria is no longer just a tropical disease: it is becoming increasingly common in Canada due to world-wide travel and malarial resistance to some anti-malarial drugs.
MERCURY (blood, urine)
Metallic mercury (Hg) is essentially nontoxic, and may be consumed orally without significant side effects. Inhaled Hg vapor, however, can lead to both chronic and acute intoxications through chemical modification to the ionized (Hg2+) species. Further conversion to methyl mercury results in a highly toxic species with a preference for fatty tissues and nerve cells. Small amounts of mercury are released from dental amalgams but this level of exposure is not considered significant. The most significant source of mercury exposure for most persons is seafood, especially with larger predatory fish (e.g. tuna, swordfish) or long-lived species (e.g. rock cod).
METANEPHRINES (urine)
A rare tumor of the adrenal gland (pheochromocytoma) may secrete compounds such as adrenaline which cause severe hypertension. ‘Metanephrines’ refer to adrenaline and noradrenaline metabolites: these can be detected in the urine, where they provide a convenient screening test for pheochromocytoma. For more information, see the section on Adrenal Disease.
METHICILLIN-RESISTANT STAPHYLOCOCCUS AUREUS (by PCR)
S. aureus is a microorganism that may acquire resistance to a class of antibiotics and detection of this is important for infection control. Molecular methods allow rapid and simultaneous identification of Staphylococcus aureus and the genes associated with antibiotic resistance and virulence. This assay also identifies those strains likely to be associated with the community-acquired variants of MRSA.
MICROALBUMIN (urine)
Despite its name, Microalbumin is not “small albumin” but rather normal albumin present at low levels in the urine. It is an important test to be carried out periodically in all diabetics in order to detect the early signs of kidney disease. In order to compensate for the variable concentration of urine, Microalbumin is usually reported as a ratio to urine creatinine, or the ‘Albumin-Creatinine Ratio’ (ACR). For more information, see the section on Diabetes.
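The ACR described above is a simple ratio; a minimal sketch follows, with the caveat that the decision threshold mentioned in the comment is an assumed, illustrative value (cut-offs vary by laboratory and guideline):

```python
def albumin_creatinine_ratio(urine_albumin_mg_l: float,
                             urine_creatinine_mmol_l: float) -> float:
    """Albumin-Creatinine Ratio (ACR) in mg/mmol: urine albumin divided
    by urine creatinine, compensating for how dilute the urine sample is."""
    return urine_albumin_mg_l / urine_creatinine_mmol_l

# 30 mg/L albumin with 10 mmol/L creatinine -> ACR of 3.0 mg/mmol.
# Values above roughly 2-3 mg/mmol are commonly flagged for follow-up
# (assumed cut-off; the applicable guideline value should be used).
print(albumin_creatinine_ratio(30.0, 10.0))  # 3.0
```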
MONO TEST (blood)
The mono test is important for diagnosis of infectious mononucleosis. For more information, see the section on Infectious Mononucleosis.
MORPHOLOGY (blood)
The blood morphology examination is done by examining a smear of blood on a glass slide using a microscope. It is done when results of the Hematology Panel (Profile) are abnormal, and more clearly identifies the nature of the abnormality. For more information, see the section on the Hematology Profile.
OCCULT BLOOD (stool)
Occult blood means unseen blood. It is a test carried out on feces as a way of detecting the presence of a bowel lesion such as cancer of the colon. For more information, see the section on Colon Cancer.
OVA & PARASITE (stool)
The O&P test is carried out by examining feces (stool samples) looking for worms or other abnormal organisms that may be the cause of diarrhea or malabsorption.
OLANZAPINE (blood)
Olanzapine (Zyprexa®) is an anti-psychotic drug. Measurement is carried out to adjust the dosage. For more information, see the section on Therapeutic Drug Monitoring.
OXALATE (urine)
Oxalate is a metabolic end product that is excreted into the urine. High levels may cause calcium oxalate stones to form. Causes of increased oxalate include diet (e.g. animal protein, purines, gelatin, calcium, strawberries, pepper, rhubarb, beans, beets, spinach, tomatoes, chocolate, cocoa, tea), intestinal disease and rare inborn errors of metabolism.
PATERNITY TESTING (blood)
Resolution of paternity and other biological relationships can be achieved using DNA methods. PCR-based testing provides rapid and accurate genetic fingerprints of individuals. These results can be compared between child and alleged father in order to determine the likelihood of paternity. The method is also applicable to other relationships in question, such as maternity or sib-ship. There is a fee for this service – please refer to the appropriate section of this website for more information or call 1-800-663-9422 ext. 4535. For more information on paternity testing at LifeLabs, please click here.
PHENOBARBITAL (blood)
Phenobarbital is a drug used for the treatment of seizure disorders. Adjustment of the appropriate dosage is made by measuring the drug in serum. For more information, see the section on Therapeutic Drug Monitoring.
PHENYTOIN (blood)
Phenytoin is an anticonvulsant drug. Adjustment of the appropriate dosage is made by measuring the drug in serum. For more information, see the section on Therapeutic Drug Monitoring.
PHOSPHORUS, INORGANIC (blood, urine)
Phosphorus is an essential element needed in every cell. It is closely linked to calcium metabolism. Phosphorus is present in the body in two basic forms: (1) bound to organic molecules (such as ATP) where it often serves to provide energy transfer, and (2) in a free (ionic) form as phosphate ion (PO4). This latter group is referred to as the inorganic phosphorus (or phosphate). Inorganic phosphate is closely associated with acid-base balance, calcium metabolism, and glucose flux into cells. Increased values are seen in: compensation for primary hypocalcemia (hypoparathyroid), hypercalcemia of rapid bone resorption, renal disease. Decreased values are seen in: compensation for primary hypercalcemia (1° hyperparathyroidism), post-glucose ingestion or insulin administration, and Fanconi’s syndrome of the proximal tubule.
Urine phosphorus is measured as a secondary test to understand calcium metabolism.
PINWORM
Pinworms are tiny, almost microscopic worms, which infect the anal area and cause itching. They can be identified under a microscope. The collection of the sample is done using tape fixed to a microscope slide.
PLATELET COUNT (blood)
Platelets are tiny cells found in the blood stream. They have the ability to stick together and form tiny “plugs” in any damage to blood vessels. Thus, they are necessary to stop bleeding. Low levels of platelets indicate a bleeding or bruising tendency. High levels may be associated with unwanted blood clot formation or bleeding. Platelets may be normal in amount but may sometimes function abnormally. This problem requires special tests to diagnose.
PORPHYRINS (blood, urine, feces)
A group of inherited disorders of heme metabolism is called the porphyrias. They result in a spectrum of syndromes including mental derangement, light-sensitive skin disease, and acute abdominal pain. These disorders can be diagnosed by measuring porphyrins in the blood, serum, urine, and feces, though the recommended first-tier test is urine porphobilinogen.
POTASSIUM (blood, urine)
Potassium is an important element that is necessary for proper cell membrane activity. Too much potassium causes cells to “twitch” inappropriately and if high enough may cause the heart to go into an arrhythmia. Common causes are renal failure, hemolysis, adrenal insufficiency, acidosis, dehydration and drugs. Low potassium may cause muscle weakness and paralysis. Decreased values are seen with prolonged vomiting/diarrhea, hyperaldosteronism and drugs.
The urine potassium level indicates the level of potassium in the diet (as long as the serum level is normal).
PREGNANCY TEST (blood, urine)
The pregnancy test detects the presence of Chorionic Gonadotropin in serum or urine. It is turned positive by pregnancy, tumors, or (rarely) unusual antibodies in the serum. The serum test is typically positive within 8-10 days of conception in most women, while the urine test may be unreliable prior to the first week following the first missed menstrual period. Sometimes a pregnancy occurs but is “lost”, though the test may stay positive for some days. For more information, see the section on Pregnancy.
PROGESTERONE (blood)
Progesterone is normally present at low levels in women. It rises during the latter third of the menstrual cycle if ovulation has occurred.
PROGESTERONE, 17OH (blood)
17OH Progesterone is made by the adrenal gland. It is raised if there is an internal malfunction of adrenal metabolism. In such cases, the adrenal may manufacture excessive androgens. Thus, the test is used to investigate hirsutism (excessive body hair in women), as well as suspected congenital adrenal hyperplasia.
PROLACTIN (blood)
Prolactin is a pituitary hormone whose purpose is to mediate lactation following delivery. It is elevated in certain tumors of the pituitary gland in both men and women. Increased values are seen in: pregnancy and lactation, prolactin-secreting adenomas (values over 6000 are almost always prolactinomas), non-functional tumours of the hypothalamus, empty sella syndrome and certain drugs (neuroleptics: fluphenazine, haloperidol; anti-emetics: metoclopramide, domperidone; antidepressants: imipramine, amitriptyline; and methyldopa, opiates, estrogens, cimetidine). Miscellaneous causes include: renal disease, surgical stress, sleep, exercise, sexual intercourse, hypoglycemia, post-epileptic seizure, breast stimulation and chest lesions, primary hypothyroidism, cirrhosis, and spinal cord disease. Reduced values have no clinical significance but may be caused by bromocriptine.
PROTEIN ELECTROPHORESIS (blood, urine)
Protein electrophoresis is a chemical technique used to identify abnormal proteins, particularly monoclonal immunoglobulins. For more information, see the section on Multiple Myeloma.
PROTEIN, TOTAL (blood, urine)
Total protein measurements determine the amount of all the proteins in serum or urine. This test is used in conjunction with measurements of albumin or with protein electrophoresis to study deficiencies or over-production of serum proteins. Normally, very little protein is present in urine: high amounts may indicate kidney disease.
PROTHROMBIN TIME (blood)
The International Normalized Ratio (also known as the Prothrombin Time) measures defects in part of the clotting mechanism. It is used to detect clotting abnormalities. However, it is mainly used to measure the effect of “blood thinning” medications such as Warfarin or Coumadin. For more information, see the section on Anticoagulant Monitoring.
PROSTATE-SPECIFIC ANTIGEN (blood)
The Prostate Specific Antigen (PSA) is used to detect the presence of prostatic carcinoma or to monitor the effect of therapy. It is not recommended as a screening test, though it may be performed as such on a patient-pay basis. For more information, see the section on Prostate Cancer.
PARATHYROID HORMONE (blood)
Parathyroid hormone (PTH) is the main control of calcium in the serum. It is measured to determine why the calcium level is abnormal.
PARTIAL THROMBOPLASTIN TIME (blood)
The activated Partial Thromboplastin Time is a test of the clotting mechanism of the blood. It is used in the diagnosis of bleeding disorders. If abnormal (prolonged), further testing is warranted to identify the specific disorder.
PORPHOBILINOGEN (urine)
In the inherited disorder known as Acute Intermittent Porphyria patients suffer excruciating abdominal pain and produce excessive amounts of porphobilinogen in their urine.
RHEUMATOID FACTOR (blood)
The Rheumatoid Arthritis test (or Rheumatoid Factor = RF) is used to diagnose Rheumatoid arthritis. For more information, see the section on Rheumatoid Arthritis.
RENIN (blood)
Renin is an enzyme hormone produced by the kidney that controls blood pressure. It should not be confused with rennin, an enzyme found in milk. Excessive production of renin (certain kidney disease, blood vessel blockage, and extraordinarily rare tumors) will cause a form of high blood pressure. However, it is measured most often to check for suppression by high levels of aldosterone or other mineralocorticoids.
RETICULOCYTE COUNT (blood)
Reticulocytes are newborn red cells. An increased number of reticulocytes in blood indicates that red cells are being formed faster than normal, e.g. in response to blood loss or hemolysis. Decreased levels may be seen in nutritional deficiencies or bone marrow failure.
RH FACTOR (blood)
All red cells have a “type” (A, B, AB, and O) and also may be classified as Rh+ or Rh-. Knowing this is important for pregnant women since Rh- women may develop antibodies that can harm a Rh+ baby unless prophylaxis is undertaken.
When a bacterial infection is considered, the potentially infected area is swabbed and the swab sent to the lab. The lab then grows any bacteria that are present, determines whether they are harmful (pathogenic), and determines the antibiotics that will treat the infection most effectively.
SEMEN EXAMINATION (semen)
Semen is examined as part of an infertility work-up. The sample is examined for the presence of spermatozoa, how many are present (the count), the number of abnormal forms present, and whether they move quickly or not (motility). To be performed properly the sample must be examined fresh i.e. within 2 hrs of collection. Semen is also examined following a vasectomy to determine whether it is “complete” or not. In this case, only the presence or absence of sperm is noted (and if present, a count is made).
SICKLE CELL TEST (blood)
Sickle cell anemia is a condition that mainly affects people of African descent. The red cells in this disorder “crumple up” if the oxygen level becomes too low. The cells, under a microscope, become sickle shaped (rather than being “biconcave discs” or dimpled spheres). Sickle Cell Test is a qualitative solubility test for the presence of sickling hemoglobin in human blood.
SODIUM (blood, urine)
Sodium is the major ion in the serum. Abnormalities in sodium concentration cause blood pressure abnormalities and edema and may be due to hormonal or to kidney disease. Increased values are seen in dehydration, IV fluids administration, posterior pituitary disease, adrenal disease, and kidney disease. Decreased values are called hyponatremia and are seen in over-hydration, renal disorders, cardiac failure, pituitary and adrenal disorders, and excessive glucose or lipids.
Urine sodium is often used to evaluate the level of salt in the diet.
THYROXINE, FREE (blood)
Thyroxine (T4) is the main thyroid hormone. It is raised in hyperthyroidism and low in hypothyroidism. Free thyroxine (FT4) refers to the miniscule active fraction of total T4 which is available to the body. For more information, see the section on Thyroid Function Tests.
TESTOSTERONE (blood)
Testosterone is the main male hormone. In men, low testosterone causes impotence. In women, elevated testosterone causes hirsutism (excess hair) or even virilization (in cancers). Low testosterone may also be the cause of loss of libido. The measurement may be confused by alterations of Sex Hormone Binding Globulin (SHBG), which binds a considerable fraction of available testosterone.
TESTOSTERONE, FREE (blood)
Free testosterone is a special test that directly measures the level of circulating free (unbound) testosterone. It cannot be requested simultaneously with Bioavailable Testosterone.
TESTOSTERONE, BIOAVAILABLE (blood)
Bioavailable testosterone (BAT) refers to the estimation of circulating testosterone that is available to the body. This requires the measurement of total testosterone, sex hormone binding globulin (SHBG) and albumin. Specifically, BAT is taken as testosterone not bound to SHBG. Using these results, it is also possible to calculate the level of free testosterone (cFT), which compares very well with free testosterone values obtained through high-level reference methods.
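The calculation referred to above can be sketched with a mass-action binding model such as Vermeulen's. This is a hedged illustration, not necessarily the exact method any particular laboratory uses; the association constants and the albumin molecular weight are assumed literature values.

```python
import math

# Assumed literature values (Vermeulen-style calculation).
K_ALBUMIN = 3.6e4  # L/mol, testosterone-albumin association constant
K_SHBG = 1.0e9     # L/mol, testosterone-SHBG association constant

def free_and_bioavailable_t(total_t_nmol_l, shbg_nmol_l, albumin_g_l=43.0):
    """Estimate calculated free testosterone (cFT) and bioavailable
    testosterone (BAT = free + albumin-bound), both in nmol/L."""
    albumin_mol_l = albumin_g_l / 69000.0   # albumin MW ~69 kDa (assumed)
    n = 1.0 + K_ALBUMIN * albumin_mol_l     # scales free to free + albumin-bound
    k = K_SHBG / 1.0e9                      # SHBG constant expressed in L/nmol
    # The mass-action balance reduces to a quadratic equation in free T:
    #   n*k*FT^2 + (n + k*(SHBG - TT))*FT - TT = 0
    a = n * k
    b = n + k * (shbg_nmol_l - total_t_nmol_l)
    c = -total_t_nmol_l
    ft = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return ft, n * ft

cft, bat = free_and_bioavailable_t(total_t_nmol_l=20.0, shbg_nmol_l=40.0)
# Free testosterone is typically only a few percent of the total, while the
# bioavailable fraction (free plus albumin-bound) is substantially larger.
```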
THALASSAEMIA INVESTIGATION (blood)
Thalassemia is the reduced production of otherwise normal hemoglobin molecules. It is covered in more detail in another section. For more information, see the section on Thalassemia.
THEOPHYLLINE (blood)
Theophylline is an asthma drug. It can be measured to enhance the dosage regime. For more information, see the section on Therapeutic Drug Monitoring.
THROMBIN TIME (blood)
The Thrombin Time is a test of the clotting mechanism of the blood. It is employed in work-ups of blood coagulation disorders.
THYROGLOBULIN (blood)
Thyroglobulin is a protein found inside the thyroid gland that helps to store thyroid hormone until it is needed. It is not normally released into the serum, but may be detected with certain thyroid cancers and is used to monitor the success of therapy of these conditions.
THYROGLOBULIN ANTIBODIES (blood)
Thyroglobulin antibodies are sometimes increased in thyroid inflammation (Thyroiditis). They are also measured to ensure that Thyroglobulin results are not falsely decreased (or, less commonly, increased) due to interference. For more information, see the section on Thyroid Function Tests.
THYROID PEROXIDASE ANTIBODIES (blood)
Antibodies to the thyroid peroxidase enzyme TPO (formerly known as anti-microsomal antibodies) are antibodies to thyroid tissue. They are often present at low levels in healthy people, while high levels indicate thyroid disorders such as thyroiditis. For more information, see the section on Thyroid Function Tests.
TOTAL IRON BINDING CAPACITY (blood)
Iron is an essential element that is transported in the plasma (i.e. non-red cell fraction of blood) almost exclusively by the protein transferrin (see below). The total iron binding capacity (TIBC) is a measure of the total level of iron that can be bound by serum proteins. As the TIBC level is very similar to that of transferrin, the two are usually considered to be interchangeable. Measurements of TIBC are used as a secondary test of iron metabolism (ferritin is the primary test) in conjunction with serum iron. These results are used to calculate the percentage saturation of serum protein iron binding sites, which is used to evaluate iron status (deficiency and overload). See also: Transferrin. For more information, see the section on Hemochromatosis.
TRANSFERRIN (blood)
Transferrin is a key protein involved in iron transport throughout the body. Measurements of its level are used as a secondary test of iron metabolism (ferritin is the primary test) in conjunction with serum iron. These results are used to calculate the percentage saturation of transferrin’s iron binding sites, which is used to evaluate iron status (deficiency and overload). See also: Total Iron Binding Capacity. For more information, see the section on Hemochromatosis.
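The percentage-saturation calculation mentioned above is a simple ratio of serum iron to the iron-binding capacity; a minimal sketch, where the interpretive comments are illustrative assumptions (reference intervals vary by laboratory):

```python
def transferrin_saturation(serum_iron_umol_l: float, tibc_umol_l: float) -> float:
    """Percent saturation of iron-binding sites: serum iron / TIBC x 100."""
    return 100.0 * serum_iron_umol_l / tibc_umol_l

# 10 umol/L serum iron against a TIBC of 60 umol/L -> ~16.7% saturation,
# a pattern consistent with iron deficiency; a very high saturation raises
# the question of iron overload (hemochromatosis).
print(round(transferrin_saturation(10.0, 60.0), 1))  # 16.7
```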
TRICHOMONAS WET MOUNT
Trichomonas is a microorganism that infects the genital region. It is detected by collecting a drop of fluid and examining it under a microscope.
TRIGLYCERIDES (blood)
Triglycerides are fats that circulate in the blood stream. Their measurement helps provide a profile of the blood fat (lipid) content and is used to calculate levels of LDL (“bad cholesterol”). For more information, see the section on Cholesterol.
THYROID STIMULATING HORMONE (blood)
Thyroid Stimulating Hormone (TSH) is produced by the pituitary gland and it controls the thyroid gland. It is the main thyroid function test: values rise in hypothyroidism and decline in hyperthyroidism. TSH levels are slow to respond to changes in thyroid functions (typically over a period of weeks), but they show a more pronounced change than is seen for free T4. For more information, see the section on Thyroid Function Tests.
UREA (blood, urine)
Urea is a waste product of protein metabolism. It is removed from the body by the kidney. Its measurement is used as a test of kidney function because when the kidney is impaired, levels of urea go up. It is also known as the BUN (blood urea nitrogen).
URIC ACID (blood, urine)
Uric acid is a waste product of nucleic acid (DNA, RNA) metabolism. It is elevated in gout, kidney disease and in conditions where body tissues degenerate rapidly. A high urine uric acid is a risk factor for uric acid stone formation.
URINALYSIS, MACROSCOPIC (urine)
Macroscopic urinalysis is a common screening test where a urine sample is tested for several chemical substances (glucose, protein, blood, etc.). If results are abnormal, the sample may then be examined under a microscope (see Urinalysis, microscopic). For more information, see the section on Urinalysis.
URINALYSIS, MICROSCOPIC (urine)
Microscopic urinalysis consists of examining urine under a microscope to look primarily for red blood cells, white blood cells and degraded cell deposits from the walls of the urinary tract (casts). This information is used to help diagnose urinary tract infection (UTI) and other disorders of the kidney. For more information, see the section on Urinalysis.
UROBILINOGEN (urine)
Urobilinogen is usually detected as part of routine urinalysis. In obstruction to bile flow, urine urobilinogen is negative if the obstruction is complete; in hepatocellular disease it may be decreased early and increased late in the course. For more information, see the section on Urinalysis.
VALPROIC ACID (blood)
Valproic acid is an anti-epileptic drug. Serum measurements help with dosage adjustment. For more information, see the section on Therapeutic Drug Monitoring.
VANILLYLMANDELIC ACID (urine)
Vanillylmandelic acid (VMA) is a catecholamine metabolite. Levels in urine are used as a screening test for the rare tumour pheochromocytoma, though current guidelines recommend urine metanephrines for this purpose. For more information, see the section on Adrenal Disease.
VISCOSITY (blood)
In certain conditions (macroglobulinemia, myeloma) the blood becomes thick and sluggish. Viscosity measurements quantitate this abnormality.
VITAMIN B12 (blood)
Vitamin B12 deficiency may lead to macrocytic anemia and to neurological degeneration of part of the spinal cord.
VITAMIN D (blood)
Vitamin D status is measured as the total level of its 25-hydroxy vitamin D (25-OH Vit D) metabolites 25-OH Vit D3, derived endogenously from sunlight, and 25-OH Vit D2 obtained from supplementation. Serum levels decrease with age and pregnancy and vary with sun exposure and supplementation; values tend to be highest in late summer and lowest in spring. Severe deficiency manifests as rickets in children and osteomalacia in adults and is characterized by decreased serum calcium and phosphorus and increased alkaline phosphatase. Vitamin D intoxication caused by extremely large doses (50,000 – 100,000 IU/day: for comparison, Health Canada recommends no more than 2000 IU/day from all sources) is more common in infants and children than adults and results in hypercalcemia, increased risk of renal stones, soft-tissue calcification, gastrointestinal symptoms, and growth and mental retardation. Note that extremely high levels may be observed in hypoparathyroid patients receiving physiological doses of Vitamin D.
LifeLabs measures both 25-OH Vitamin D3 and 25-OH Vit D2 and reports the sum of their concentrations, as their physiological activities are considered to be equal. Optimum levels are still a matter of debate, but there is agreement that levels below 50 nmol/L are insufficient and those above 200 nmol/L are toxic.
1,25-Dihydroxy Vitamin D3 (1,25-OH2 Vit D), the principal active form of Vit D, is formed by renal cells and regulates calcium absorption by the body. Secretion is influenced by parathyroid hormone (PTH) and the calcium and phosphorus content of the diet. The main reasons for measuring 1,25-OH2 Vit D are to differentiate primary hyperparathyroidism from hypercalcemia of cancer, distinguish between Vit D-dependent and Vit D-resistant rickets, monitor the Vit D status of patients with chronic renal failure, and assess compliance with 1,25-OH2 Vit D therapy. It is often useful to measure PTH in conjunction with 1,25-OH2 Vit D.
Testing for 1,25-OH2 Vit D is typically indicated only for monitoring 1,25-OH2 Vit D therapy or determining Vit D status in patients with significant renal disease. For all other clinical situations, Vit D status is best measured via 25-OH Vit D.
Vitamin D update
VANCOMYCIN-RESISTANT ENTEROCOCCI (by PCR)
Vancomycin-resistant enterococci (VRE) are a class of antibiotic-resistant microorganisms. They have emerged as an important cause of nosocomial infections. The molecular method allows rapid and simultaneous identification of major VRE variants: Enterococcus faecalis, E. faecium, E. gallinarum, E. casseliflavus and E. flavescens.
White Blood Cell count is part of the routine Hematology Panel.
Whipple’s Disease is a systemic infection characterized by fever, weight loss, diarrhea, polyarthritis and adenopathy. The causative organism is very difficult to culture; however, molecular methods can detect its presence in peripheral blood samples. The ideal sample is tissue biopsy, so negative results from peripheral blood do not rule out infection.
Yeast, such as Candida albicans, can cause infections of the mouth or genitalia. The microbiology lab can detect the presence of yeast. Although there is much non-medical information about yeast in the blood, this is something that only occurs in rare situations (patients who are gravely ill).
YERSINIA (by PCR)
Yersinia species are among the many microorganisms that cause diarrhea and other gastrointestinal symptoms. The molecular method allows rapid and simultaneous detection of major pathogenic Yersinia species.
ZINC (blood, urine)
Zinc is not very toxic. However, true zinc deficiency may lead to the failure of wounds to heal properly. | 1 | 6 |
In computing, the kernel is the main component of most computer operating systems; it is a bridge between applications and the actual data processing done at the hardware level. The kernel's responsibilities include managing the system's resources (the communication between hardware and software components). Usually, as a basic component of an operating system, a kernel can provide the lowest-level abstraction layer for the resources (especially processors and I/O devices) that application software must control to perform its function. It typically makes these facilities available to application processes through inter-process communication mechanisms and system calls.
Operating system tasks are done differently by different kernels, depending on their design and implementation. While monolithic kernels execute all the operating system code in the same address space to increase the performance of the system, microkernels run most of the operating system services in user space as servers, aiming to improve maintainability and modularity of the operating system. A range of possibilities exists between these two extremes.
The kernel's primary function is to manage the computer's hardware and resources and allow other programs to run and use these resources. Typically, the resources consist of:
A kernel may implement these features itself, or rely on some of the processes it runs to provide the facilities to other processes, although in this case it must provide some means of IPC to allow processes to access the facilities provided by each other.
Finally, a kernel must provide running programs with a method to make requests to access these facilities.
The kernel has full access to the system's memory and must allow processes to safely access this memory as they require it. Often the first step in doing this is virtual addressing, usually achieved by paging and/or segmentation. Virtual addressing allows the kernel to make a given physical address appear to be another address, the virtual address. Virtual address spaces may be different for different processes; the memory that one process accesses at a particular (virtual) address may be different memory from what another process accesses at the same address. This allows every program to behave as if it is the only one (apart from the kernel) running and thus prevents applications from crashing each other.
On many systems, a program's virtual address may refer to data which is not currently in memory. The layer of indirection provided by virtual addressing allows the operating system to use other data stores, like a hard drive, to store what would otherwise have to remain in main memory (RAM). As a result, operating systems can allow programs to use more memory than the system has physically available. When a program needs data which is not currently in RAM, the CPU signals to the kernel that this has happened, and the kernel responds by writing the contents of an inactive memory block to disk (if necessary) and replacing it with the data requested by the program. The program can then be resumed from the point where it was stopped. This scheme is generally known as demand paging.
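The demand-paging cycle just described can be sketched as a toy user-space simulation. All names below are hypothetical; a real kernel does this inside its page-fault handler with hardware MMU support, and the "backing store" is an actual disk.

```python
# Toy demand-paging simulation: a small number of physical frames
# backs a larger virtual address space. On a "page fault" the least
# recently used frame is written out to a backing store and reused.
from collections import OrderedDict

class DemandPager:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # resident pages, in LRU order
        self.backing_store = {}       # evicted pages live "on disk"
        self.faults = 0

    def access(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)   # hit: mark recently used
            return self.frames[page]
        # Page fault: evict if physical memory is full, then load.
        self.faults += 1
        if len(self.frames) >= self.num_frames:
            victim, data = self.frames.popitem(last=False)  # LRU victim
            self.backing_store[victim] = data   # write contents to disk
        data = self.backing_store.pop(page, f"page-{page}")
        self.frames[page] = data
        return data
```

With two frames and the access pattern 0, 1, 0, 2, the third access hits, the fourth faults and evicts page 1 (the least recently used) to the backing store.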
Virtual addressing also allows creation of virtual partitions of memory in two disjointed areas, one being reserved for the kernel (kernel space) and the other for the applications (user space). The applications are not permitted by the processor to address kernel memory, thus preventing an application from damaging the running kernel. This fundamental partition of memory space has contributed much to current designs of actual general-purpose kernels and is almost universal in such systems, although some research kernels (e.g. Singularity) take other approaches.
To perform useful functions, processes need access to the peripherals connected to the computer, which are controlled by the kernel through device drivers. A device driver is a computer program that enables the operating system to interact with a hardware device. It provides the operating system with information on how to control and communicate with a certain piece of hardware. The driver is an important and vital part of the operating system. The design goal of a driver is abstraction; the function of the driver is to translate the OS-mandated function calls (programming calls) into device-specific calls. In theory, the device should work correctly with the suitable driver. Device drivers are used for such things as video cards, sound cards, printers, scanners, modems, and LAN cards. The common levels of abstraction of device drivers are:
1. On the hardware side:
2. On the software side:
For example, to show the user something on the screen, an application would make a request to the kernel, which would forward the request to its display driver, which is then responsible for actually plotting the character/pixel.
A kernel must maintain a list of available devices. This list may be known in advance (e.g. on an embedded system where the kernel will be rewritten if the available hardware changes), configured by the user (typical on older PCs and on systems that are not designed for personal use) or detected by the operating system at run time (normally called plug and play). In a plug and play system, a device manager first performs a scan on different hardware buses, such as Peripheral Component Interconnect (PCI) or Universal Serial Bus (USB), to detect installed devices, then searches for the appropriate drivers.
As device management is a very OS-specific topic, these drivers are handled differently by each kind of kernel design, but in every case, the kernel has to provide the I/O to allow drivers to physically access their devices through some port or memory location. Very important decisions have to be made when designing the device management system, as in some designs accesses may involve context switches, making the operation very CPU-intensive and easily causing a significant performance overhead.
A system call is a mechanism that is used by an application program to request a service from the operating system. System calls use a machine-code instruction that causes the processor to change mode, for example from user mode to supervisor mode, in which the operating system performs actions like accessing hardware devices or the memory management unit. Generally the operating system provides a library that sits between the operating system and normal programs, usually a C library such as glibc or the Windows API. The library handles the low-level details of passing information to the kernel and switching to supervisor mode. System calls include close, open, read, wait and write.
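The open, read, write and close calls mentioned above are reachable through any thin wrapper over the C library. As an illustration, Python's os module exposes the same descriptor-based calls; the roundtrip helper is an invented name for this sketch.

```python
# Each os.* call below is a thin wrapper around the corresponding
# system call: open(2), write(2), read(2), close(2), unlink(2).
import os
import tempfile

def roundtrip(data: bytes) -> bytes:
    fd, path = tempfile.mkstemp()       # open(2) with O_CREAT, via mkstemp
    try:
        os.write(fd, data)              # write(2): copy bytes to the file
        os.close(fd)                    # close(2): release the descriptor
        fd = os.open(path, os.O_RDONLY) # open(2): fresh read-only descriptor
        out = os.read(fd, len(data))    # read(2): copy bytes back
        os.close(fd)
    finally:
        os.unlink(path)                 # unlink(2): remove the temp file
    return out
```

Every one of these calls traps into the kernel, which checks the descriptor and performs the privileged I/O on the process's behalf.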
To actually perform useful work, a process must be able to access the services provided by the kernel. This is implemented differently by each kernel, but most provide a C library or an API, which in turn invokes the related kernel functions.
The method of invoking the kernel function varies from kernel to kernel. If memory isolation is in use, it is impossible for a user process to call the kernel directly, because that would be a violation of the processor's access control rules. A few possibilities are:
An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviors (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.
The mechanisms or policies provided by the kernel can be classified according to several criteria, including: static (enforced at compile time) or dynamic (enforced at run time); pre-emptive or post-detection; according to the protection principles they satisfy (i.e. Denning); whether they are hardware supported or language based; whether they are more an open mechanism or a binding policy; and many more.
Support for hierarchical protection domains is typically implemented using CPU modes. An efficient and simple way to provide hardware support of capabilities is to delegate to the MMU the responsibility of checking access-rights for every memory access, a mechanism called capability-based addressing. Most commercial computer architectures lack MMU support for capabilities. An alternative approach is to simulate capabilities using commonly supported hierarchical domains; in this approach, each protected object must reside in an address space that the application does not have access to; the kernel also maintains a list of capabilities in such memory. When an application needs to access an object protected by a capability, it performs a system call and the kernel performs the access for it. The performance cost of address-space switching limits the practicality of this approach in systems with complex interactions between objects, but it is used in current operating systems for objects that are not accessed frequently or which are not expected to perform quickly. Approaches where the protection mechanism is not supported by firmware but is instead simulated at higher levels (e.g. simulating capabilities by manipulating page tables on hardware that does not have direct support) are possible, but there are performance implications. Lack of hardware support may not be an issue, however, for systems that choose to use language-based protection.
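The simulated-capability scheme described here, in which the kernel holds the capability list and performs protected accesses on the application's behalf via a system call, can be sketched as a toy model. Class and method names are invented for illustration only.

```python
# Toy model of simulated capabilities: user code never touches a
# protected object directly. It names the object in a "system call",
# and the kernel checks its capability list before acting.

class Kernel:
    def __init__(self):
        self._objects = {}   # object id -> protected object (kernel memory)
        self._caps = {}      # (process id, object id) -> set of rights

    def create(self, pid, obj_id, value):
        # The creating process receives full rights to the new object.
        self._objects[obj_id] = value
        self._caps[(pid, obj_id)] = {"read", "write"}

    def grant(self, pid, obj_id, rights):
        # A privileged operation: extend another process's capability.
        self._caps.setdefault((pid, obj_id), set()).update(rights)

    def syscall_read(self, pid, obj_id):
        # The kernel performs the access for the caller, but only
        # after checking the capability list it maintains.
        if "read" not in self._caps.get((pid, obj_id), set()):
            raise PermissionError("no read capability")
        return self._objects[obj_id]
```

A process without an entry in the capability list is refused; granting it a "read" right makes the same call succeed.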
An important kernel design decision is the choice of the abstraction levels where the security mechanisms and policies should be implemented. Kernel security mechanisms play a critical role in supporting security at higher levels.
One approach is to use firmware and kernel support for fault tolerance (see above), and build the security policy for malicious behavior on top of that (adding features such as cryptography mechanisms where necessary), delegating some responsibility to the compiler. Approaches that delegate enforcement of security policy to the compiler and/or the application level are often called language-based security.
The lack of many critical security mechanisms in current mainstream operating systems impedes the implementation of adequate security policies at the application abstraction level. In fact, a common misconception in computer security is that any security policy can be implemented in an application regardless of kernel support.
Typical computer systems today use hardware-enforced rules about what programs are allowed to access what data. The processor monitors the execution and stops a program that violates a rule (e.g., a user process that is about to read or write to kernel memory, and so on). In systems that lack support for capabilities, processes are isolated from each other by using separate address spaces. Calls from user processes into the kernel are regulated by requiring them to use one of the above-described system call methods.
An alternative approach is to use language-based protection. In a language-based protection system, the kernel will only allow code to execute that has been produced by a trusted language compiler. The language may then be designed such that it is impossible for the programmer to instruct it to do something that will violate a security requirement.
Advantages of this approach include:
Edsger Dijkstra proved that from a logical point of view, atomic lock and unlock operations operating on binary semaphores are sufficient primitives to express any functionality of process cooperation. However this approach is generally held to be lacking in terms of safety and efficiency, whereas a message passing approach is more flexible. A number of other approaches (either lower- or higher-level) are available as well, with many modern kernels providing support for systems such as shared memory and remote procedure calls.
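Dijkstra's atomic lock and unlock operations (P and V) map directly onto acquire and release in most threading libraries. A minimal sketch, using threads as stand-ins for cooperating processes and a binary semaphore to serialize access to shared state:

```python
# Two or more "processes" (threads here) increment a shared counter.
# A binary semaphore guards the critical section: acquire() is
# Dijkstra's P operation, release() is V.
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore: at most one holder

def worker(n):
    global counter
    for _ in range(n):
        mutex.acquire()          # P: wait, then enter critical section
        counter += 1             # the protected shared update
        mutex.release()          # V: leave critical section

def run(n_threads=4, per_thread=10000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(per_thread,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the semaphore the interleaved read-modify-write updates could be lost; with it, the final count is exactly the number of increments performed.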
The idea of a kernel where I/O devices are handled uniformly with other processes, as parallel co-operating processes, was first proposed and implemented by Brinch Hansen (although similar ideas were suggested in 1967). In Hansen's description of this, the "common" processes are called internal processes, while the I/O devices are called external processes.
As with physical memory, allowing applications direct access to controller ports and registers can cause the controller to malfunction or the system to crash. Moreover, depending on the device, programming it can be surprisingly complex and may involve several different controllers. Because of this, providing a more abstract interface to manage the device is important. This interface is normally provided by a device driver or hardware abstraction layer. Frequently, applications will require access to these devices. The kernel must maintain the list of these devices by querying the system for them in some way. This can be done through the BIOS, or through one of the various system buses (such as PCI/PCIe or USB). When an application requests an operation on a device (such as displaying a character), the kernel needs to send this request to the currently active video driver. The video driver, in turn, needs to carry out this request. This is an example of inter-process communication (IPC).
Naturally, the above listed tasks and features can be provided in many ways that differ from each other in design and implementation.
The principle of separation of mechanism and policy is the substantial difference between the philosophy of micro and monolithic kernels. Here a mechanism is the support that allows the implementation of many different policies, while a policy is a particular "mode of operation". For instance, a mechanism may provide for user log-in attempts to call an authorization server to determine whether access should be granted; a policy may be for the authorization server to request a password and check it against an encrypted password stored in a database. Because the mechanism is generic, the policy could more easily be changed (e.g. by requiring the use of a security token) than if the mechanism and policy were integrated in the same module.
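The log-in example above can be sketched by keeping the mechanism (a fixed authorization hook that the login path always calls) separate from the policy (what that hook actually checks). All names below are hypothetical.

```python
# Mechanism: login() always consults an authorization hook with a
# fixed call shape. Policy: what the hook verifies is swappable,
# e.g. a stored password hash or a security token.
import hashlib

USERS = {"alice": hashlib.sha256(b"secret").hexdigest()}

def password_policy(user, credential):
    # Policy 1: hash the supplied password and compare to the store.
    digest = hashlib.sha256(credential.encode()).hexdigest()
    return USERS.get(user) == digest

def token_policy(valid_tokens):
    # Policy 2: accept any credential that is a known security token.
    def check(user, credential):
        return credential in valid_tokens
    return check

def login(user, credential, authorize):
    # The mechanism never changes; only the policy passed in does.
    return "granted" if authorize(user, credential) else "denied"
```

Switching from passwords to tokens means supplying a different policy function; the login mechanism itself is untouched, which is exactly the benefit the separation principle claims.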
In a minimal microkernel just some very basic policies are included, and its mechanisms allow what is running on top of the kernel (the remaining part of the operating system and the other applications) to decide which policies to adopt (as to memory management, high-level process scheduling, file system management, etc.). A monolithic kernel instead tends to include many policies, therefore restricting the rest of the system to rely on them.
Per Brinch Hansen presented arguments in favor of separation of mechanism and policy. The failure to properly fulfill this separation is one of the major causes of the lack of substantial innovation in existing operating systems, a problem common in computer architecture. The monolithic design is induced by the "kernel mode"/"user mode" architectural approach to protection (technically called hierarchical protection domains), which is common in conventional commercial systems; in fact, every module needing protection is therefore preferably included in the kernel. This link between monolithic design and "privileged mode" can be traced back to the key issue of mechanism-policy separation; in fact the "privileged mode" architectural approach merges the protection mechanism with the security policies, while the major alternative architectural approach, capability-based addressing, clearly distinguishes between the two, leading naturally to a microkernel design (see Separation of protection and security).
While monolithic kernels execute all of their code in the same address space (kernel space) microkernels try to run most of their services in user space, aiming to improve maintainability and modularity of the codebase. Most kernels do not fit exactly into one of these categories, but are rather found in between these two designs. These are called hybrid kernels. More exotic designs such as nanokernels and exokernels are available, but are seldom used for production systems. The Xen hypervisor, for example, is an exokernel.
In a monolithic kernel, all OS services run along with the main kernel thread, thus also residing in the same memory area. This approach provides rich and powerful hardware access. Some developers, such as UNIX developer Ken Thompson, maintain that it is "easier to implement a monolithic kernel" than microkernels. The main disadvantages of monolithic kernels are the dependencies between system components — a bug in a device driver might crash the entire system — and the fact that large kernels can become very difficult to maintain.
Monolithic kernels, which have traditionally been used by Unix-like operating systems, contain all the operating system core functions and the device drivers (small programs that allow the operating system to interact with hardware devices, such as disk drives, video cards and printers). This is the traditional design of UNIX systems. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. Every part which is to be accessed by most programs and which cannot be put in a library is in the kernel space: device drivers, scheduler, memory handling, file systems, network stacks. Many system calls are provided to applications, to allow them to access all those services. A monolithic kernel, while initially loaded with subsystems that may not be needed, can be tuned to a point where it is as fast as or faster than one that was specifically designed for the hardware, although in a more general sense. Modern monolithic kernels, such as those of Linux and FreeBSD, both of which fall into the category of Unix-like operating systems, feature the ability to load modules at runtime, thereby allowing easy extension of the kernel's capabilities as required, while helping to minimize the amount of code running in kernel space. In the monolithic kernel, some advantages hinge on these points:
Most work in the monolithic kernel is done via system calls. These are interfaces, usually kept in a tabular structure, that access some subsystem within the kernel such as disk operations. Essentially calls are made within programs and a checked copy of the request is passed through the system call; hence the request does not have far to travel. The monolithic Linux kernel can be made extremely small not only because of its ability to dynamically load modules but also because of its ease of customization. In fact, there are some versions that are small enough to fit together with a large number of utilities and other programs on a single floppy disk and still provide a fully functional operating system (one of the most popular of which is muLinux). This ability to miniaturize its kernel has also led to a rapid growth in the use of Linux in embedded systems.
These types of kernels consist of the core functions of the operating system and the device drivers, with the ability to load modules at runtime. They provide rich and powerful abstractions of the underlying hardware: this approach defines a high-level virtual interface over the hardware, with a set of system calls to implement operating system services such as process management, concurrency and memory management in several modules that run in supervisor mode. This design has several flaws and limitations:
Microkernel (also abbreviated μK or uK) is the term describing an approach to Operating System design by which the functionality of the system is moved out of the traditional "kernel", into a set of "servers" that communicate through a "minimal" kernel, leaving as little as possible in "system space" and as much as possible in "user space". A microkernel that is designed for a specific platform or device is only ever going to have what it needs to operate. The microkernel approach consists of defining a simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as memory management, multitasking, and inter-process communication. Other services, including those normally provided by the kernel, such as networking, are implemented in user-space programs, referred to as servers. Microkernels are easier to maintain than monolithic kernels, but the large number of system calls and context switches might slow down the system because they typically generate more overhead than plain function calls.
Only parts which really require being in a privileged mode are in kernel space: IPC (inter-process communication), the basic scheduler or scheduling primitives, basic memory handling, and basic I/O primitives. Many critical parts now run in user space: the complete scheduler, memory handling, file systems, and network stacks. Microkernels were invented as a reaction to the traditional "monolithic" kernel design, whereby all system functionality was put in one static program running in a special "system" mode of the processor. In the microkernel, only the most fundamental tasks are performed, such as being able to access some (not necessarily all) of the hardware, manage memory and coordinate message passing between the processes. Some systems that use microkernels are QNX and the HURD. In the case of QNX and Hurd, user sessions can be entire snapshots of the system itself, referred to as views. The very essence of the microkernel architecture illustrates some of its advantages:
Most microkernels use a message passing system of some sort to handle requests from one server to another. The message passing system generally operates on a port basis with the microkernel. As an example, if a request for more memory is sent, a port is opened with the microkernel and the request sent through. Once within the microkernel, the steps are similar to system calls. The rationale was that it would bring modularity in the system architecture, which would entail a cleaner system, easier to debug or dynamically modify, customizable to users' needs, and higher-performing. Microkernels are used in operating systems such as AIX, BeOS, Hurd, Mach, Mac OS X, MINIX, and QNX. Although microkernels are very small by themselves, in combination with all their required auxiliary code they are, in fact, often larger than monolithic kernels. Advocates of monolithic kernels also point out that the two-tiered structure of microkernel systems, in which most of the operating system does not interact directly with the hardware, creates a not-insignificant cost in terms of system efficiency. These types of kernels normally provide only the minimal services, such as defining memory address spaces, inter-process communication (IPC) and process management. The other functions, such as running the hardware processes, are not handled directly by microkernels. Proponents of microkernels point out that monolithic kernels have the disadvantage that an error in the kernel can cause the entire system to crash. However, with a microkernel, if a kernel process crashes, it is still possible to prevent a crash of the system as a whole by merely restarting the service that caused the error. Although this sounds sensible, it is questionable how important it is in reality, because operating systems with monolithic kernels such as Linux have become extremely stable and can run for years without crashing.
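The port-based message passing described above can be modeled in user space with queues standing in for ports. This is a toy sketch with invented names, not a real microkernel API: a "memory server" answers allocation requests that clients send over a port, instead of clients calling an allocator directly.

```python
# Toy port-based IPC: user-level "servers" exchange requests
# through queues (ports); a reply port travels inside each message.
import queue
import threading

class Port:
    def __init__(self):
        self._q = queue.Queue()
    def send(self, msg):
        self._q.put(msg)
    def receive(self):
        return self._q.get()   # blocks until a message arrives

def memory_server(port):
    # A user-space server: hands out (fake) memory blocks on request.
    while True:
        msg = port.receive()
        if msg is None:                      # shutdown message
            return
        reply_port, nbytes = msg
        reply_port.send(bytearray(nbytes))   # the "allocated" block

def request_memory(server_port, nbytes):
    # The client side: IPC through ports instead of a direct call.
    reply = Port()
    server_port.send((reply, nbytes))
    return reply.receive()
```

The extra hop through the port is exactly the overhead the surrounding text discusses: what a monolithic kernel does with a function call, a microkernel does with two message transfers.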
Other services provided by the kernel such as networking are implemented in user-space programs referred to as servers. Servers allow the operating system to be modified by simply starting and stopping programs. For a machine without networking support, for instance, the networking server is not started. The task of moving in and out of the kernel to move data between the various applications and servers creates overhead which is detrimental to the efficiency of micro kernels in comparison with monolithic kernels.
Disadvantages in the microkernel exist however. Some are:
A microkernel allows the implementation of the remaining part of the operating system as a normal application program written in a high-level language, and the use of different operating systems on top of the same unchanged kernel. It is also possible to dynamically switch among operating systems and to have more than one active simultaneously.
As the computer kernel grows, a number of problems become evident. One of the most obvious is that the memory footprint increases. This is mitigated to some degree by perfecting the virtual memory system, but not all computer architectures have virtual memory support. To reduce the kernel's footprint, extensive editing has to be performed to carefully remove unneeded code, which can be very difficult with non-obvious interdependencies between parts of a kernel with millions of lines of code.
By the early 1990s, due to the various shortcomings of monolithic kernels versus microkernels, monolithic kernels were considered obsolete by virtually all operating system researchers. As a result, the design of Linux as a monolithic kernel rather than a microkernel was the topic of a famous debate between Linus Torvalds and Andrew Tanenbaum. There is merit on both sides of the argument presented in the Tanenbaum–Torvalds debate.
Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system. Some developers also maintain that monolithic systems are extremely efficient if well-written. The monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.
The performance of microkernels constructed in the 1980s and early 1990s was poor. Studies that empirically measured the performance of these microkernels did not analyze the reasons for such inefficiency. The explanations of this data were left to "folklore", with the assumption that they were due to the increased frequency of switches from "kernel-mode" to "user-mode", to the increased frequency of inter-process communication and to the increased frequency of context switches.
In fact, as conjectured in 1995, the reasons for the poor performance of microkernels might as well have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, and (3) the particular implementation of those concepts. Therefore, it remained to be studied whether the solution to building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.
On the other end, the hierarchical protection domains architecture that leads to the design of a monolithic kernel has a significant performance drawback each time there's an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and 'supervisor mode'), since this requires message copying by value.
By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently, newer microkernels, optimized for performance, such as L4 and K42 have addressed these problems.
Hybrid kernels are used in most commercial operating systems such as Microsoft Windows NT, 2000, XP, Vista, and 7. Apple Inc's own Mac OS X uses a hybrid kernel called XNU which is based upon code from Carnegie Mellon's Mach kernel and FreeBSD's monolithic kernel. They are similar to micro kernels, except they include some additional code in kernel-space to increase performance. These kernels represent a compromise that was implemented by some developers before it was demonstrated that pure micro kernels can provide high performance. These types of kernels are extensions of micro kernels with some properties of monolithic kernels. Unlike monolithic kernels, these types of kernels are unable to load modules at runtime on their own. Hybrid kernels are micro kernels that have some "non-essential" code in kernel-space in order for the code to run more quickly than it would were it to be in user-space. Hybrid kernels are a compromise between the monolithic and microkernel designs. This implies running some services (such as the network stack or the filesystem) in kernel space to reduce the performance overhead of a traditional microkernel, but still running kernel code (such as device drivers) as servers in user space.
Many traditionally monolithic kernels are now at least adding (if not actively exploiting) the module capability. The most well known of these kernels is the Linux kernel. The modular kernel essentially can have parts of it that are built into the core kernel binary or binaries that load into memory on demand. It is important to note that a tainted module has the potential to destabilize a running kernel. Many people become confused on this point when discussing microkernels. It is possible to write a driver for a microkernel in a completely separate memory space and test it before "going" live. When a kernel module is loaded, it accesses the monolithic portion's memory space by adding to it what it needs, therefore opening the doorway to possible pollution. A few advantages of the modular (or hybrid) kernel are:
Modules generally communicate with the kernel using a module interface of some sort. The interface is generalized (although particular to a given operating system), so it is not always possible to use modules. Often the device drivers may need more flexibility than the module interface affords. Essentially, a module call involves two system calls, and the safety checks that only have to be done once in the monolithic kernel may now be done twice. Some of the disadvantages of the modular approach are:
A nanokernel delegates virtually all services — including even the most basic ones like interrupt controllers or the timer — to device drivers to make the kernel memory requirement even smaller than a traditional microkernel.
Exokernels are a still experimental approach to operating system design. They differ from the other types of kernels in that their functionality is limited to the protection and multiplexing of the raw hardware, providing no hardware abstractions on top of which to develop applications. This separation of hardware protection from hardware management enables application developers to determine how to make the most efficient use of the available hardware for each specific program.
Exokernels in themselves are extremely small. However, they are accompanied by library operating systems, providing application developers with the functionalities of a conventional operating system. A major advantage of exokernel-based systems is that they can incorporate multiple library operating systems, each exporting a different API, for example one for high level UI development and one for real-time control.
Strictly speaking, an operating system (and thus, a kernel) is not required to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to work without any hardware abstraction or operating system support. Most early computers of the 1950s and early 1960s operated this way, being reset and reloaded between the execution of different programs. Eventually, small ancillary programs such as program loaders and debuggers were left in memory between runs, or loaded from ROM. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on some video game consoles and embedded systems, but in general, newer computers use modern operating systems and kernels.
In 1969 the RC 4000 Multiprogramming System introduced the system design philosophy of a small nucleus "upon which operating systems for different purposes could be built in an orderly manner", what would be called the microkernel approach.
In the decade preceding Unix, computers had grown enormously in power — to the point where computer operators were looking for new ways to get people to use the spare time on their machines. One of the major developments during this era was time-sharing, whereby a number of users would get small slices of computer time, at a rate at which it appeared they were each connected to their own, slower, machine.
The development of time-sharing systems led to a number of problems. One was that users, particularly at universities where the systems were being developed, seemed to want to hack the system to get more CPU time. For this reason, security and access control became a major focus of the Multics project in 1965. Another ongoing issue was properly handling computing resources: users spent most of their time staring at the screen and thinking instead of actually using the resources of the computer, and a time-sharing system should give the CPU time to an active user during these periods. Finally, the systems typically offered a memory hierarchy several layers deep, and partitioning this expensive resource led to major developments in virtual memory systems.
The Commodore Amiga was released in 1985, and was among the first (and certainly most successful) home computers to feature a hybrid architecture. The Amiga's kernel executive component, exec.library, uses a microkernel message-passing design, but there are other kernel components, like graphics.library, that have direct access to the hardware. There is no memory protection, and the kernel is almost always running in user mode. Only special actions are executed in kernel mode, and user-mode applications can ask the operating system to execute their code in kernel mode.
For instance, printers were represented as a "file" at a known location — when data was copied to the file, it printed out. Other systems, to provide a similar functionality, tended to virtualize devices at a lower level — that is, both devices and files would be instances of some lower level concept. Virtualizing the system at the file level allowed users to manipulate the entire system using their existing file management utilities and concepts, dramatically simplifying operation. As an extension of the same paradigm, Unix allows programmers to manipulate files using a series of small programs, using the concept of pipes, which allowed users to complete operations in stages, feeding a file through a chain of single-purpose tools. Although the end result was the same, using smaller programs in this way dramatically increased flexibility as well as ease of development and use, allowing the user to modify their workflow by adding or removing a program from the chain.
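The pipe paradigm described above can be shown with an ordinary shell one-liner; the sample words and the particular tools chosen here are illustrative only:

```shell
# Chain small single-purpose tools: sort groups identical lines,
# uniq -c counts each group, sort -rn ranks groups by count, and
# awk prints the most frequent word. Each stage reads the previous
# stage's output through a pipe.
printf 'cat\ndog\ncat\nbird\n' \
  | sort \
  | uniq -c \
  | sort -rn \
  | awk 'NR==1 {print $2, $1}'
# prints: cat 2
```

Adding or removing a stage (say, a `grep` filter before `sort`) changes the workflow without modifying any of the individual tools, which is precisely the flexibility the paragraph above describes.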
In the Unix model, the operating system consists of two parts: the large collection of utility programs that drive most operations, and the kernel that runs them. Under Unix, from a programming standpoint, the distinction between the two is fairly thin: the kernel is a program, running in supervisor mode, that acts as a program loader and supervisor for the small utility programs making up the rest of the system, and that provides locking and I/O services for those programs; beyond that, the kernel did not intervene at all in user space.
Over the years the computing model changed, and Unix's treatment of everything as a file or byte stream was no longer as universally applicable as before. Although a terminal could be treated as a file or a byte stream, which is printed to or read from, the same did not seem to be true for a graphical user interface. Networking posed another problem: even if network communication can be compared to file access, the low-level packet-oriented architecture dealt with discrete chunks of data and not with whole files. As the capability of computers grew, Unix became increasingly cluttered with code, partly because the modularity of the Unix kernel is extensively scalable. While kernels might have had 100,000 lines of code in the seventies and eighties, kernels of modern Unix successors like Linux have more than 13 million lines.
Modern Unix derivatives are generally based on module-loading monolithic kernels. Examples are the Linux kernel in its many distributions, as well as the Berkeley Software Distribution variant kernels such as FreeBSD, DragonFly BSD, OpenBSD, NetBSD, and Mac OS X. Apart from these alternatives, amateur developers maintain an active operating system development community, populated by self-written hobby kernels which mostly end up sharing many features with the Linux, FreeBSD, DragonFly BSD, OpenBSD or NetBSD kernels and/or being compatible with them.
Apple Computer first launched Mac OS in 1984, bundled with its Apple Macintosh personal computer. Apple moved to a nanokernel design in Mac OS 8.6. Against this, Mac OS X is based on Darwin, which uses a hybrid kernel called XNU, which was created combining the 4.3BSD kernel and the Mach kernel.
Microsoft Windows was first released in 1985 as an add-on to MS-DOS. Because of its dependence on another operating system, initial releases of Windows, prior to Windows 95, were considered an operating environment (not to be confused with an operating system). This product line continued to evolve through the 1980s and 1990s, culminating with the Windows 9x series (upgrading the system's capabilities to 32-bit addressing and pre-emptive multitasking) in the mid-1990s and ending with the release of Windows Me in 2000. Microsoft also developed Windows NT, an operating system intended for high-end and business users. This line started with the release of Windows NT 3.1 in 1993, and has continued into the 2000s with Windows 7 and Windows Server 2008.
The release of Windows XP in October 2001 brought the NT kernel version of Windows to general users, replacing Windows 9x with a completely different operating system. The architecture of Windows NT's kernel is considered a hybrid kernel because the kernel itself contains tasks such as the Window Manager and the IPC Managers, with a client/server layered subsystem model.
Although Mach, developed at Carnegie Mellon University from 1985 to 1994, is the best-known general-purpose microkernel, other microkernels have been developed with more specific aims. The L4 microkernel family (mainly the L3 and the L4 kernel) was created to demonstrate that microkernels are not necessarily slow. Newer implementations such as Fiasco and Pistachio are able to run Linux next to other L4 processes in separate address spaces.
QNX is a real-time operating system with a minimalistic microkernel design that has been developed since 1982, having been far more successful than Mach in achieving the goals of the microkernel paradigm. It is principally used in embedded systems and in situations where software is not allowed to fail, such as the robotic arms on the space shuttle and machines that control grinding of glass to extremely fine tolerances, where a tiny mistake may cost hundreds of thousands of dollars.
DESCRIPTION
A cochlear implant is a device for individuals with severe to profound hearing loss who receive only limited benefit from amplification with hearing aids. A cochlear implant provides direct electrical stimulation to the auditory nerve, bypassing the usual transducer cells that are absent or nonfunctional in a deaf cochlea. The basic components of a cochlear implant include both external and internal components. The external components include a microphone, an external sound processor, and an external transmitter. The internal components are implanted surgically and include an internal receiver implanted within the temporal bone, and an electrode array that extends from the receiver into the cochlea through a surgically created opening in the round window of the middle ear.
Sounds that are picked up by the microphone are carried to the external sound processor, which transforms sound into coded signals that are then transmitted transcutaneously to the implanted internal receiver. The receiver converts the incoming signals to electrical impulses that are then conveyed to the electrode array, ultimately resulting in stimulation of the auditory nerve.
Several cochlear implants are commercially available in the United States: the Nucleus family of devices, manufactured by Cochlear Corporation; the Clarion family of devices, manufactured by Advanced Bionics; and the Med El Combi 40+ device, manufactured by Med El. Over the years, subsequent generations of the various components of the devices have been approved by the U.S. Food and Drug Administration (FDA), focusing on improved electrode design and speech-processing capabilities. Furthermore, smaller devices and the accumulating experience in children have resulted in broadening of the selection criteria to include children as young as 12 months. The FDA-labeled indications for currently marketed electrode arrays are summarized below.
FDA Approval Status of Currently Marketed Cochlear Electrodes
*The Clarion CII Bionic Ear System is composed of a Clarion HiFocus electrode in conjunction with a next generation internal transmitter. Cochlear Hybrid™ (Cochlear, Inc.) is an electro-acoustic stimulation device currently undergoing clinical trials and as of September 2009 has not received FDA approval.
While cochlear implants have typically been used unilaterally, in recent years interest in bilateral cochlear implantation has arisen. The proposed benefits of bilateral cochlear implants are improved understanding of speech in noise and localization of sounds. Improvements in speech intelligibility may occur with bilateral cochlear implants through binaural summation; i.e., signal processing of sound input from 2 sides may provide a better representation of sound and allow one to separate out noise from speech. Speech intelligibility and localization of sound, or spatial hearing, may also be improved through head shadow and squelch effects; i.e., sound reaching the ear closer to the noise source differs in frequency content and intensity from that reaching the other ear, allowing one to sort out noise and identify the direction of a sound. Bilateral cochlear implantation may be performed independently with separate implants and speech processors in each ear, or with a single processor. There is, of course, a substantial risk of a second surgery, infection, facial nerve damage, reduced vestibular function and destruction of the inner ear with a second implant, and there is only marginal hearing improvement going from one to two implants. However, no single processor for bilateral cochlear implantation has been FDA approved for use in the United States. Additionally, single processors do not provide binaural benefit and may impair sound localization and increase the signal-to-noise ratio received by the cochlear implant.
POLICY
Unilateral or bilateral cochlear implantation of a U.S. Food and Drug Administration (FDA) approved cochlear implant device may be considered medically necessary in patients age 12 months and older with bilateral severe-to-profound pre- or post-lingual (sensorineural) hearing loss, defined as a hearing threshold pure-tone average of 70 dB (decibels) or greater at 500 Hz, 1000 Hz and 2000 Hz, who have shown limited or no benefit from hearing aids.
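As an illustration only (not part of the policy language), the pure-tone-average criterion is simple arithmetic over the audiometric thresholds at the three measured frequencies (assumed here to be 500, 1000, and 2000 Hz); the threshold values below are hypothetical:

```shell
# Hypothetical audiometric thresholds, in dB HL, for one ear
t500=75; t1000=80; t2000=85
# Integer pure-tone average (PTA) across the three frequencies
pta=$(( (t500 + t1000 + t2000) / 3 ))
echo "PTA: ${pta} dB"            # prints: PTA: 80 dB
if [ "$pta" -ge 70 ]; then
  echo "meets the 70 dB severe-to-profound criterion"
fi
```

In practice this determination is made from calibrated audiometry and the full clinical criteria above, not from a calculation like this one.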
Upgrades of an existing, functioning external system to achieve aesthetic improvement, such as smaller profile components or a switch from a body-worn, external sound processor to a behind-the-ear (BTE) model, are considered not medically necessary.
Note: Auditory Brain Stem Implant, designed to restore hearing in patients with neurofibromatosis who are deaf secondary to removal of bilateral acoustic neuromas, is not addressed in this policy.
POLICY EXCEPTIONS
Federal Employee Program (FEP) may dictate that all FDA-approved devices, drugs or biologics may not be considered investigational and thus these devices may be assessed only on the basis of their medical necessity.
POLICY GUIDELINES
Bilateral cochlear implantation should be considered only when it has been determined that the alternative of unilateral cochlear implant plus hearing aid in the contralateral ear will not result in a binaural benefit; i.e., in those patients with hearing loss of a magnitude where a hearing aid will not produce the required amplification.
Hearing loss is rated on a scale based on the threshold of hearing. Severe hearing loss is defined as a bilateral hearing threshold of 70-90 dB and profound hearing loss is defined as a bilateral hearing threshold of 90 dB and above.
In adults, limited benefit from hearing aids is defined as scores 50% correct or less in the ear to be implanted on tape recorded sets of open-set sentence recognition. In children, limited benefit is defined as failure to develop basic auditory skills, and in older children, <30% correct on open-set tests.
A post-cochlear implant rehabilitation program is necessary to achieve benefit from the cochlear implant. The rehabilitation program consists of 6 to 10 sessions that last approximately 2½ hours each. The rehabilitation program includes development of skills in understanding running speech, recognition of consonants and vowels, and tests of speech perception ability.
Contraindications to cochlear implantation may include deafness due to lesions of the eighth cranial nerve or brain stem, chronic infections of the middle ear and mastoid cavity or tympanic membrane perforation. The absence of cochlear development as demonstrated on CT scans remains an absolute contraindication.
Investigative service is defined as the use of any treatment procedure, facility, equipment, drug, device, or supply not yet recognized by certifying boards and/or approving or licensing agencies or published peer review criteria as standard, effective medical practice for the treatment of the condition being treated and as such therefore is not considered medically necessary.
The coverage guidelines outlined in the Medical Policy Manual should not be used in lieu of the Member's specific benefit plan language.
POLICY HISTORY
7/1992: Approved by Medical Policy Advisory Committee (MPAC)
12/30/1999: Policy Guidelines updated
9/21/2001: Policy rewritten to be reflective of Blue Cross Blue Shield Association policy # 7.01.05, Code Reference section updated, CPT code 92507, 92510 added
11/2001: Reviewed by MPAC; revisions approved
4/18/2002: Type of Service and Place of Service deleted
5/29/2002: Code Reference section updated, CPT code 69949 added, HCPCS L8619, V5269, V5273, V5299, V5336, V5362, V5363 added
3/6/2003: Code Reference section updated, CPT code 92601, 92602, 92603, 92604 added
7/15/2004: Reviewed by MPAC, bilateral cochlear implantation considered investigational, Description section aligned with BCBSA policy # 7.01.05, definition of investigational added Policy Guidelines, Sources updated
10/5/2004: Code Reference section updated, CPT code 69949 deleted, CPT 92507 description revised, CPT 92508 added, ICD-9 procedure code 20.96, 20.97, 20.99, 95.49 added, ICD-9 diagnosis code range 389.10-389.18 listed separately, ICD-9 diagnosis 389.7 added, HCPCS L8619 note added, HCPCS V5269, V5273, V5299, V5336, V5362, V5363 deleted
3/22/2005: Code Reference section updated, CPT code 92510 description revised, HCPCS L8615, L8616, L8617, L8618 with Note: "See POLICY GUIDELINES for information regarding replacement of the external component of the cochlear implant" and effective date of 1/1/2005 added.
11/15/2005: HCPCS codes K0731, K0732, L8620 added
03/10/2006: Coding updated. CPT4 / HCPCS 2006 revisions added to policy
03/13/2006: Policy reviewed, no changes
09/13/2006: Coding updated. ICD9 2006 revisions added to policy
12/27/2006: Code Reference section updated per the 2007 HCPCS revisions
3/27/2007: Policy reviewed, no changes to policy statement. Bilateral cochlear implantation added to Policy Guidelines section
06/26/2007: Policy statement updated; bilateral cochlear implantation changed from investigational to may be considered medically necessary
7/19/2007: Reviewed and approved by MPAC
9/18/2007: Code reference section updated. ICD-9 2007 revisions added to policy
1/7/2009: Policy reviewed, policy section partially rewritten and clarified.
3/12/2010: Code Reference section updated. New HCPCS codes L8627, L8628 and L8629 added to covered table.
04/26/2010: Policy description updated regarding devices. Policy statements modified for clarity; intent unchanged. Contraindications to cochlear implantation added to the policy guidelines. FEP verbiage added to the Policy Exceptions section. Deleted outdated references from the Sources section. Removed deleted CPT code 92510 from the codes table as this code was deleted on 12/31/2005.
08/02/2011: Policy reviewed; no changes.
07/17/2012: Policy reviewed; no changes.
05/08/2013: Removed deleted CPT code 92510 from the Code Reference section.
SOURCE(S)
Blue Cross Blue Shield Association policy # 7.01.05
CODE REFERENCE
This is not intended to be a comprehensive list of codes. Some covered procedure codes have multiple descriptions.
The code(s) listed below are ONLY covered if the procedure is performed according to the "Policy" section of this document. | 2 | 13 |
Arab people, also known as Arabs (Arabic: عرب, ʿarab), are a panethnicity primarily living in the Arab world, which is located in Western Asia and North Africa. They are identified as such on one or more of genealogical, linguistic, or cultural grounds, with tribal affiliations, and intra-tribal relationships playing an important part of Arab identity.
The word "Arab" has had several different, but overlapping, meanings over the centuries (and sometimes even today). In addition to including all Arabized people of the world (with language tending to be the acid test), it has at times been used exclusively for bedouin, i.e., Arab nomads and their now almost entirely settled descendants; a related word, "`a-RAB" (with the Arabic letter alif in the second syllable), was once sometimes used when this specific meaning was intended. It is sometimes used that way colloquially even today in some places. Townspeople once were sometimes called "sons of the Arabs." As in the case of other ethnicities or nations, people identify themselves (or are identified by others) as "Arabs" to varying degrees. This may not be one's primary identity (it tends to compete with country, religion, sect, etc.), and whether it is emphasized may depend upon one's audience.
If the diverse Arab pan-ethnicity is regarded as a single ethnic group, then it constitutes one of the world's largest after Han Chinese.
The earliest documented use of the word "Arab" to refer to a people appears in the Monolith Inscription, an Akkadian language record of the 9th century BC Assyrian Conquest of Syria (Arabs had formed part of a coalition of forces opposed to Assyria). Listed among the booty captured by the army of king Shalmaneser III of Assyria in the Battle of Qarqar are 1000 camels of "Gi-in-di-bu'u the ar-ba-a-a" or "[the man] Gindibu belonging to the ʕarab" (ar-ba-a-a being an adjectival nisba of the noun ʕarab).
The most popular Arab account holds that the word 'Arab' came from an eponymous father called Yarab, who was supposedly the first to speak Arabic. Al-Hamdani had another view; he states that Arabs were called GhArab (West in Semitic) by Mesopotamians because Arabs resided to the west of Mesopotamia; the term was then corrupted into Arab. Yet another view is held by Al-Masudi that the word Arabs was initially applied to the Ishmaelites of the "Arabah" valley.
The root of the word has many meanings in Semitic languages including "west/sunset," "desert," "mingle," "merchant," "raven" and are "comprehensible" with all of these having varying degrees of relevance to the emergence of the name. It is also possible that some forms were metathetical from ʿ-B-R "moving around" (Arabic ʿ-B-R "traverse"), and hence, it is alleged, "nomadic."
Arab identity is defined independently of religious identity, and pre-dates the rise of Islam, with historically attested Arab Christian kingdoms and Arab Jewish tribes. Today, however, most Arabs are Muslim, with a minority adhering to other faiths, largely Christianity. Arabs are generally Sunni, Shia or Sufi Muslims, but currently, 7.1 percent to 10 percent of Arabs are Arab Christians. This figure does not include Christian ethnic groups such as Assyrians and Syriacs.
The early Arabs were the tribes of Northern Arabia speaking proto-Arabic dialects, although since early days other peoples have become Arabs through an Arabization process that could mean intermarriage with Arabs, adopting the Arabic language and culture, or both. For example, the Ghassanids and the Lakhmids, which originated in South Semitic-speaking Yemen, made a major contribution to the creation of the Arabic language. The same process happened all over the Arab world after the spread of Islam, through the mixing of Arabs with several other peoples. The Arab cultures went through a mixing process; therefore every Arab country has cultural specificities, constituting a cultural mix that also draws on local novelties achieved after the Arabization took place. However, all Arab countries also share a common culture in most aspects: arts (music, literature, poetry, calligraphy), cultural products (handicrafts, carpets, henna, bronze carving), social behaviour and relations (hospitality, codes of conduct among friends and family), customs and superstitions, some dishes (shorba, mloukhia), traditional clothing, and architecture.
Non-Arab Muslims, who are about 80 percent of the world's Muslim population, do not form part of the Arab world, but instead comprise what is the geographically larger, and more diverse, Muslim World.
In the USA, Arabs have historically been racially classified as white/Caucasian, a classification used by the U.S. Census as well since 1997.
Arabic, the main unifying feature among Arabs, is a Semitic language originating in Arabia. From there it spread to a variety of distinct peoples across most of West Asia and North Africa, resulting in their acculturation and eventual denomination as Arabs. Arabization, a culturo-linguistic shift, was often, though not always, in conjunction with Islamization, a religious shift.
With the rise of Islam in the 7th century, and as the language of the Qur'an, Arabic became the lingua franca of the Islamic world. (See Anwar G. Chegne, "Arabic: Its Significance and Place in Arab-Muslim Society," Middle East Journal 19 (Autumn 1965), pp. 447–470.) It was in this period that Arabic language and culture was widely disseminated with the early Islamic expansion, both through conquest and cultural contact.
Arabic culture and language, however, began a more limited diffusion before the Islamic age, first spreading in West Asia beginning in the 2nd century, as Arab Christians such as the Ghassanids, Lakhmids and Banu Judham began migrating north from Arabia into the Syrian Desert, south western Iraq and the Levant.
In the modern era, defining who is an Arab is done on the grounds of one or more of the following two criteria:
The relative importance of these factors is estimated differently by different groups and frequently disputed. Some combine aspects of each definition, as done by Palestinian Habib Hassan Touma, who defines an Arab "in the modern sense of the word", as "one who is a national of an Arab state, has command of the Arabic language, and possesses a fundamental knowledge of Arab tradition, that is, of the manners, customs, and political and social systems of the culture." Most people who consider themselves Arab do so based on the overlap of the political and linguistic definitions.
An Arab is a person whose language is Arabic, who lives in an Arabic-speaking country, and who is in sympathy with the aspirations of the Arabic-speaking peoples.
According to Sadek Jawad Sulaiman, the former Ambassador of Oman to the United States:
The Arabs are defined by their culture, not by race; and their culture is defined by its essential twin constituents of Arabism and Islam. To most of the Arabs, Islam is their indigenous religion; to all of the Arabs, Islam is their indigenous civilization. The Arab identity, as such, is a culturally defined identity, which means being Arab is being someone whose mother culture, or dominant culture, is Arabism. Beyond that, he or she might be of any ancestry, of any religion or philosophical persuasion, and a citizen of any country in the world. Being Arab does not contradict with being non-Muslim or non-Semitic or not being a citizen of an Arab state.
The relation of ʿarab and ʾaʿrāb is complicated further by the notion of "lost Arabs" al-ʿArab al-ba'ida mentioned in the Qur'an as punished for their disbelief. All contemporary Arabs were considered as descended from two ancestors, Qahtan and Adnan.
Versteegh (1997) is uncertain whether to ascribe this distinction to the memory of a real difference of origin of the two groups, but it is certain that the difference was strongly felt in early Islamic times. Even in Islamic Spain there was enmity between the Qays of the northern and the Kalb of the southern group. The so-called Sabaean or Himyarite language described by Abū Muhammad al-Hasan al-Hamdānī (died 946) appears to be a special case of language contact between the two groups, an originally north Arabic dialect spoken in the south, and influenced by Old South Arabian.
During the Muslim conquests of the 7th and 8th centuries, the Arabs forged an Arab Empire (under the Rashidun and Umayyads, and later the Abbasids) whose borders touched southern France in the west, China in the east, Asia Minor in the north, and the Sudan in the south. This was one of the largest land empires in history. In much of this area, the Arabs spread Islam and the Arabic culture, science, and language (the language of the Qur'an) through conversion and cultural assimilation.
Two references valuable for understanding the political significance of Arab identity: Michael C. Hudson, Arab Politics: The Search for Legitimacy (Yale University Press, 1977), especially Chs. 2 and 3; and Michael N. Barnett, Dialogues in Arab Politics: Negotiations in Regional Order (Columbia University Press, 1998).
The table below shows the number of Arab people, including expatriates and some groups that may not be identified as Arabs.
The total number of Arabs living in the Arab nations is 366,117,749. The total number living in non-Arab-majority states is 17,474,000. The worldwide total is 383,591,749.
|Flag||Country||Total Population||% Arabs||Notes|
|Egypt||83,688,164||90%||The common consensus among Egyptians is that this classification is tied to the use of Arabic as an official language in Egypt. The Egyptian dialect of Arabic includes thousands of Coptic words. Ninety percent of the population is Eastern Hamitic.|
|Algeria||37,367,226||70%||Most Algerian tribes have a mixed Arab and Berber background|
|Morocco||32,309,239||66%||The high level of mixing between Arabs and Berbers makes differentiating between the two ethnicities in Morocco difficult. This figure includes people of mixed Berber and Arab descent.|
|Iraq||31,467,000||75-80%||Iraqis are primarily descended from Iraq's original Mesopotamian population. The dialect of Arabic spoken by Iraqis (Mesopotamian Arabic) has an Aramaic substratum and retains vocabulary of Akkadian and Sumerian provenance. Many Iraqis look to Babylonia, Assyria, and Sumer for their origins and have a sense of Mesopotamian ethnicity, though generally not in antithesis to Arab cultural identification.|
|Saudi Arabia||26,246,000||90%||Saudis are of Arabian or Bedouin ancestry|
|Sudan||25,946,220||70%||Arabs and Bedouin are by far the largest ethnic group, among 597 tribes.|
|Syria||22,505,000||90.3%||Syrians are primarily descended from the ancient peoples of Syria. The Syrian dialect of Arabic has an Aramaic substratum like other dialects of Levantine Arabic and Mesopotamian Arabic. The Aramaeans were one of the peoples of ancient Syria and in antiquity Syria was known as Aram. The Aramaic language of the Aramaeans became the regional lingua franca during the early 1st millennium BC, and it remained so until replaced in this role by Arabic in the 8th century AD.|
|Tunisia||10,374,000||98%||Almost all of Tunisia's citizenry has Arab and Berber background. Because of the high degree of assimilation Tunisians are often referred to as Arab-Berber.|
|Libya||6,546,000||97%||Almost all of Libya's citizenry has Arab and Berber background. Because of the high degree of assimilation Libyans are often referred to as Arab-Berber.|
|Palestine||4,225,710||89%||Gaza Strip: 1,657,155, 100% Palestinian Arab, West Bank: 2,568,555, 83% Palestinian Arab and other|
|UAE||4,707,000||40%||Less than 20% of the population in the Emirates are citizens, the majority are foreign workers and expatriates. Those holding Emirati citizenship are overwhelmingly Arab.|
|Mauritania||3,343,000||80%||The majority of Mauritania's population are ethnic Moors, an ethnicity with a mix of Arab and Berber ancestry, with a smaller Black African ancestry. Moors make up 80% of the population in Mauritania; the remaining 20% are members of a number of Black African ethnic groups.|
|Qatar||1,508,000||55%||The native population is a minority in Qatar, making up 20% of the population. The native population is ethnically Arab. An additional 35% of the population is made up of Arabs, mostly Egyptian and Palestinian workers. The remaining population is made up of other foreign workers.|
|Bahrain||1,234,571||51.4%||46.0% of Bahrain's population are native Bahrainis, who are ethnically Arab. 5.4% are other Arabs (incl. GCC)|
|Western Sahara||663,000||80%||Ethnically, Western Sahara is inhabited by Arab-Berbers. Two languages are widely spoken: Hassaniya Arabic and Moroccan Arabic.|
According to the International Organization for Migration, there are 13 million first-generation Arab migrants in the world, of which 5.8 million reside in Arab countries. Arab expatriates contribute to the circulation of financial and human capital in the region and thus significantly promote regional development. In 2009 Arab countries received a total of 35.1 billion USD in remittance in-flows, and remittances sent to Jordan, Egypt and Lebanon from other Arab countries are 40 to 190 per cent higher than trade revenues between these and other Arab countries.
The 250,000 strong Lebanese community in West Africa is the largest non-African group in the region.
Arab traders have long operated in Southeast Asia and along East Africa's Swahili coast. Zanzibar was once ruled by Omani Arabs. Most of the prominent Indonesians, Malaysians, and Singaporeans of Arab descent are Hadhrami people with origins in southern Yemen in the Hadramawt coastal region.
Central Asia and Caucasus
In 1728, a Russian officer described a group of Sunni Arab nomads who populated the Caspian shores of Mughan (in present-day Azerbaijan) and spoke a mixed Turkic-Arabic language. It is believed that these groups migrated to the Caucasus in the 16th century. The 1888 edition of Encyclopædia Britannica also mentioned a certain number of Arabs populating the Baku Governorate of the Russian Empire. They retained an Arabic dialect at least into the mid-19th century, but since then have fully assimilated with the neighbouring Azeris and Tats. Today in Azerbaijan alone, there are nearly 30 settlements still holding the name Arab (for example, Arabgadim, Arabojaghy, Arab-Yengija, etc.).
From the time of the Arab conquest of the Caucasus, continuous small-scale Arab migration from various parts of the Arab world was observed in Dagestan influencing and shaping the culture of the local peoples. Up until the mid-20th century, there were still individuals in Dagestan who claimed Arabic to be their native language, with the majority of them living in the village of Darvag to the north-west of Derbent. The latest of these accounts dates to the 1930s. Most Arab communities in southern Dagestan underwent linguistic Turkicisation, thus nowadays Darvag is a majority-Azeri village.
According to the History of Ibn Khaldun, the Arabs who were once in Central Asia were either killed or fled the Tatar invasion of the region, leaving only the locals. However, today many people in Central Asia identify as Arabs. Most Arabs of Central Asia are fully integrated into local populations and sometimes call themselves the same as locals (for example, Tajiks, Uzbeks), but they use special titles to show their Arab origin, such as Sayyid, Khoja or Siddiqui.
There are only two communities with the self-identity Arab in South Asia, the Chaush of the Deccan region and the Chavuse of Gujerat, who are by and large descended from Hadhrami migrants who settled in these two regions in the 18th century. However, both these communities no longer speak Arabic, although among the Chaush there has been re-immigration to the Gulf States, and re-adoption of Arabic by these immigrants. In South Asia, claiming Arab ancestry is considered prestigious, and many communities have origin myths with a claim to Arab ancestry. Examples include the Mappilla of Kerala, Labbai of Tamil Nadu and Kokan of Maharashtra. These communities all allege an Arab ancestry, but none speak Arabic, and they follow the customs and traditions of the Hindu majority. Among Muslims of North India, Pakistan and Afghanistan, there are groups who claim the status of Sayyid and have origin myths that allege descent from the Prophet Muhammad. None of these Sayyid families speak Arabic or follow Arab customs or traditions.
There is a consensus that the Semitic peoples originated from the Arabian peninsula. It should be pointed out that these settlers were not Arabs or Arabic speakers. Early non-Arab Semitic peoples from the Ancient Near East, such as the Arameans, Akkadians (Assyrians and Babylonians), Amorites, Israelites, Eblaites, Ugarites and Canaanites, built civilizations in Mesopotamia and the Levant; genetically, they often overlapped and mixed. Slowly, however, they lost their political domination of the Near East due to internal turmoil and attacks by non-Semitic peoples. Although the Semites eventually lost political control of Western Asia to the Persian Empire, the Aramaic language remained the lingua franca of Assyria, Mesopotamia and the Levant. Aramaic itself was replaced by Greek as Western Asia's prestige language following the conquest of Alexander III of Macedon, though it survives to this day among Assyrian (aka Chaldo-Assyrian) Christians and Mandeans in Iraq, northeast Syria, southeast Turkey and northwest Iran.
The first written attestation of the ethnonym "Arab" occurs in an Assyrian inscription of 853 BCE, where Shalmaneser III lists a King Gindibu of mâtu arbâi (Arab land) as among the people he defeated at the Battle of Karkar. Some of the names given in these texts are Aramaic, while others are the first attestations of Ancient North Arabian dialects. In fact several different ethnonyms are found in Assyrian texts that are conventionally translated "Arab": Arabi, Arubu, Aribi and Urbi. Many of the Qedarite queens were also described as queens of the aribi. The Hebrew Bible occasionally refers to Aravi peoples (or variants thereof), translated as "Arab" or "Arabian." The scope of the term at that early stage is unclear, but it seems to have referred to various desert-dwelling Semitic tribes in the Syrian Desert and Arabia. Arab tribes came into conflict with the Assyrians during the reign of the Assyrian king Ashurbanipal, and he records military victories against the powerful Qedar tribe among others.
Medieval Arab genealogists divided Arabs into three groups:
Book of Jubilees 20:13 And Ishmael and his sons, and the sons of Keturah and their sons, went together and dwelt from Paran to the entering in of Babylon in all the land which is towards the East facing the desert. And these mingled with each other, and their name was called Arabs, and Ishmaelites.
Ibn Khaldun's Muqaddima distinguishes between sedentary Muslims who used to be nomadic Arabs and the Bedouin nomadic Arabs of the desert. He used the term "formerly-nomadic" Arabs and refers to sedentary Muslims by the region or city they lived in, as in Egyptians, Spaniards and Yemenis. The Christians of Italy and the Crusaders preferred the term Saracens for all the Arabs and Muslims of that time. The Christians of Iberia used the term Moor to describe all the Arabs and Muslims of that time. Muslims of Medina referred to the nomadic tribes of the deserts as the A'raab, and considered themselves sedentary, but were aware of their close racial bonds. The term "A'raab' mirrors the term Assyrians used to describe the closely related nomads they defeated in Syria.
The Qur'an does not use the word ʿarab, only the nisba adjective ʿarabiy. The Qur'an calls itself ʿarabiy, "Arabic", and Mubin, "clear". The two qualities are connected for example in ayat 43.2–3, "By the clear Book: We have made it an Arabic recitation in order that you may understand". The Qur'an became regarded as the prime example of the al-ʿarabiyya, the language of the Arabs. The term ʾiʿrāb has the same root and refers to a particularly clear and correct mode of speech. The plural noun ʾaʿrāb refers to the Bedouin tribes of the desert who resisted Muhammad, for example in ayat 9.97, alʾaʿrābu ʾašaddu kufrān wa nifāqān "the Bedouin are the worst in disbelief and hypocrisy".
Based on this, in early Islamic terminology, ʿarabiy referred to the language, and ʾaʿrāb to the Arab Bedouins, carrying a negative connotation due to the Qur'anic verdict just cited. But after the Islamic conquest of the 8th century, the language of the nomadic Arabs became regarded as the most pure by the grammarians following Abi Ishaq, and the term kalam al-ʿArab, "language of the Arabs", denoted the uncontaminated language of the Bedouins.
Proto-Arabic, or Ancient North Arabian, texts give a clearer picture of the Arabs' emergence. The earliest are written in variants of epigraphic south Arabian musnad script, including the 8th century BCE Hasaean inscriptions of eastern Saudi Arabia, the 6th century BCE Lihyanite texts of southeastern Saudi Arabia and the Thamudic texts found throughout Arabia and the Sinai (not in reality connected with Thamud).
The Nabataeans were nomadic newcomers who moved into territory vacated by the Edomites – Semites who settled the region centuries before them. Their early inscriptions were in Aramaic, but gradually switched to Arabic, and since they had writing, it was they who made the first inscriptions in Arabic. The Nabataean Alphabet was adopted by Arabs to the south, and evolved into modern Arabic script around the 4th century. This is attested by Safaitic inscriptions (beginning in the 1st century BCE) and the many Arabic personal names in Nabataean inscriptions. From about the 2nd century BCE, a few inscriptions from Qaryat al-Faw (near Sulayyil) reveal a dialect which is no longer considered "proto-Arabic", but pre-classical Arabic. Five Syriac inscriptions mentioning Arabs have been found at Sumatar Harabesi, one of which has been dated to the 2nd century CE.
Greeks and Romans referred to all the nomadic population of the desert in the Near East as Arabi. The Romans called Yemen "Arabia Felix". The Romans called the vassal nomadic states within the Roman Empire "Arabia Petraea" after the city of Petra, and called unconquered deserts bordering the empire to the south and east Arabia Magna.
Rashidun Era (632-661)
After the death of Muhammad in 632, Rashidun armies launched campaigns of conquest, establishing the Caliphate, or Islamic Empire, one of the largest empires in history. It was larger and lasted longer than the previous Arab empires of Queen Mawia or the Palmyrene Empire which was predominantly Syriac rather than Arab. The Rashidun state was a completely new state and not a mere imitation of the earlier Arab kingdoms such as the Himyarite, Lakhmids or Ghassanids, although it benefited greatly from their art, administration and architecture.
Umayyad Era (661-750)
In 661 the Caliphate passed into the hands of the Umayyad dynasty, and Damascus was established as the Muslim capital. The Umayyads were proud of their Arab ancestry and sponsored the poetry and culture of pre-Islamic Arabia. They established garrison towns at Ramla, ar-Raqqah, Basra, Kufa, Mosul and Samarra, all of which developed into major cities.
Caliph Abd al-Malik established Arabic as the Caliphate's official language in 686. This reform greatly influenced the conquered non-Arab peoples and fueled the Arabization of the region. However, the Arabs' higher status among non-Arab Muslim converts and the latter's obligation to pay heavy taxes caused resentment. Caliph Umar II strove to resolve the conflict when he came to power in 717. He rectified the situation, demanding that all Muslims be treated as equals, but his intended reforms did not take effect as he died after only three years of rule. By now, discontent with the Umayyads swept the region and an uprising occurred in which the Abbasids came to power and moved the capital to Baghdad.
The Umayyads expanded their empire westwards, capturing North Africa from the Byzantines. Prior to the Arab conquest, North Africa was inhabited by various peoples, including Punics, Vandals and Greeks. It was not until the 11th century that the Maghreb saw a large influx of ethnic Arabs. Starting in the 11th century, the Arab Bedouin Banu Hilal tribes migrated to the west. Having been sent by the Fatimids to punish the Berber Zirids for abandoning Shiism, they travelled westwards. The Banu Hilal quickly defeated the Zirids and deeply weakened the neighboring Hammadids. Their influx was a major factor in the Arabization of the Maghreb. Although Berbers would rule the region until the 16th century (under such powerful dynasties as the Almoravids, the Almohads, the Hafsids, etc.), the arrival of these tribes would eventually help to Arabize much of it ethnically, in addition to their linguistic and political impact on the non-Arabs there. With the collapse of the Umayyad state in 1031 AD, Islamic Spain was divided into small kingdoms.
Abbassid Era (750-1513)
The Abbasids led a revolt against the Umayyads and defeated them in the Battle of the Zab, effectively ending their rule in all parts of the Empire except Al-Andalus. The Abbasids were descendants of Muhammad's uncle Abbas and, unlike the Umayyads, who treated non-Arabs with contempt, they had the support of the Umayyads' non-Arab subjects. The Abbasids ruled for 200 years before losing their central control as the wilayas began to fracture; a revival of their power in the 1190s was put to an end by the Mongols, who conquered Baghdad and killed the Caliph. Members of the Abbasid royal family escaped the massacre and fled to Cairo, which had broken from Abbasid rule two years earlier; there the Mamluk generals held the political side of the kingdom while the Abbasid Caliphs engaged in civil activities and continued patronizing science, arts and literature.
Arabs were ruled by Ottoman sultans from 1513 to 1918. The Ottomans defeated the Mamluk Sultanate in Cairo and ended the Abbasid Caliphate when they chose to bear the title of Caliph. Arabs did not feel the change of administration, because the Ottomans modeled their rule after the previous Arab administration systems. After World War I, when the Ottoman Empire was overthrown by the British Empire, former Ottoman colonies were divided up between the British and French as Mandates.
Arabs in modern times live in the Arab world, which comprises 22 countries in the Middle East and North Africa. They are all modern states and became significant as distinct political entities after the fall and dissolution of the Ottoman Empire (1908–1918).
Arab Muslims are generally Sunni or Shia, one exception being the Ibadis, who predominate in Oman and can be found as small minorities in Algeria and Libya (mostly Berbers). Arab Christians generally follow Eastern Churches such as the Coptic Orthodox, Greek Orthodox and Greek Catholic churches and the Maronite church and others. In Iraq most Christians are Assyrians rather than Arabs, and follow the Assyrian Church of the East, Syriac Orthodox and Chaldean Church. The Greek Catholic churches and Maronite church are under the Pope of Rome, and a part of the larger worldwide Catholic Church. There are also Arab communities consisting of Druze and Baha'is.
Before the coming of Islam, most Arabs followed a pagan religion with a number of deities, including Hubal, Wadd, Allāt, Manat, and Uzza. A few individuals, the hanifs, had apparently rejected polytheism in favor of monotheism unaffiliated with any particular religion. Some tribes had converted to Christianity or Judaism. The most prominent Arab Christian kingdoms were the Ghassanid and Lakhmid kingdoms. When the Himyarite king converted to Judaism in the late 4th century, the elites of the other prominent Arab kingdom, the Kindites, being Himyirite vassals, apparently also converted (at least partly). With the expansion of Islam, polytheistic Arabs were rapidly Islamized, and polytheistic traditions gradually disappeared.
Today, Sunni Islam dominates in most areas, overwhelmingly so in North Africa. Shia Islam is dominant in southern Iraq and Lebanon. Substantial Shi'a populations exist in Saudi Arabia, Kuwait, northern Syria, the al-Batinah region in Oman, and in northern Yemen. The Druze community is concentrated in Lebanon, Israel and Syria. Many Druze claim independence from other major religions in the area and consider their religion more of a philosophy. Their books of worship are called Kitab Al Hikma (Epistles of Wisdom). They believe in reincarnation and pray to five messengers from God.
Christians make up 5.5% of the population of the Near East. In Lebanon they number about 39% of the population. In Syria, Christians make up 16% of the population. In British Palestine estimates ranged as high as 25%, but the figure is now 3.8%, due largely to the vast immigration of Jews into Israel following Israel's independence, and the 1948 Palestinian exodus. In the West Bank and in Gaza, Arab Christians make up 8% and 0.8% of the populations, respectively. In Egypt, Christians number about 10% of the population. In Iraq, Christians today constitute 3–4%, a number that dropped from over 5% after the Iraq war; a few of these are Arabs. In Israel, Arab Christians constitute 2.1% (roughly 9% of the Arab population). Arab Christians make up 8% of the population of Jordan. Most North and South American Arabs are Christian, as are about half of Arabs in Australia, who come particularly from Lebanon, Syria, and the Palestinian territories. One well-known member of this religious and ethnic community is Saint Abo, martyr and the patron saint of Tbilisi, Georgia.
Jews from Arab countries – mainly Mizrahi Jews and Yemenite Jews – are today usually not categorised as Arab. Sociologist Philip Mendes asserts that before the anti-Jewish actions of the 1930s and 1940s, overall Iraqi Jews "viewed themselves as Arabs of the Jewish faith, rather than as a separate race or nationality". Also, prior to the massive Sephardic emigrations to the Middle East in the 16th and 17th centuries, the Jewish communities of what are today Syria, Iraq, Israel, Lebanon, Egypt and Yemen were known by other Jewish communities as Musta'arabi Jews or "like Arabs". Prior to the emergence of the term Mizrahi, the term "Arab Jews" was sometimes used to describe Jews of the Arab world. The term is rarely used today. The few remaining Jews in the Arab countries reside mostly in Morocco and Tunisia. From the late 1940s to the early 1960s, following the creation of the state of Israel, most of these Jews fled their countries of birth and are now mostly concentrated in Israel. Some immigrated to France, where they formed a large Jewish community, that outnumbered Jews in the United States, but relatively small compared to European Jews. See Jewish exodus from Arab lands.
Dozens of large cities and hundreds of towns reflect the pronounced urban character of the Arab world.
The Islamic Golden Age was inaugurated by the middle of the 8th century by the ascension of the Abbasid Caliphate and the transfer of the capital from Damascus to the newly founded city Baghdad. The Abbassids were influenced by the Qur'anic injunctions and hadith such as "The ink of the scholar is more holy than the blood of martyrs" stressing the value of knowledge. During this period the Muslim world became an intellectual centre for science, philosophy, medicine and education as the Abbasids championed the cause of knowledge and established the "House of Wisdom" (Arabic: بيت الحكمة) in Baghdad. Rival Muslim dynasties such as the Fatimids of Egypt and the Umayyads of al-Andalus were also major intellectual centres with cities such as Cairo and Córdoba rivaling Baghdad.
Arab culture is a term that draws together the common themes and overtones found in the Arab countries, especially those of the Middle-Eastern countries. This region's distinct religion, art, and food are some of the fundamental features that define Arab culture.
Arab architecture has a deep and diverse history, dating to the dawn of history in pre-Islamic Arabia. Each of its phases is largely an extension of the earlier one, and it has also left a heavy imprint on the architecture of other nations.
Arabic music is the music of Arab people or countries, especially those centered on the Arabian Peninsula. The world of Arab music has long been dominated by Cairo, a cultural center, though musical innovation and regional styles abound from Morocco to Saudi Arabia. Beirut has, in recent years, also become a major center of Arabic music. Classical Arab music is extremely popular across the population, especially a small number of superstars known throughout the Arab world. Regional styles of popular music include Algerian raï, Moroccan gnawa, Kuwaiti sawt, Egyptian el gil and Arabesque-pop music in Turkey.
Arabic literature spans more than two millennia and has three phases: pre-Islamic, Islamic and modern. Thousands of figures have contributed to Arabic literature, many of whom were not only poets but also celebrated in other fields, as politicians, scientists and scholars, among others.
|John F. Kennedy Stadium|
|Former names||Sesquicentennial Stadium (1926)
Philadelphia Municipal Stadium (1926-1964)
John F. Kennedy Stadium (1964-1992)
|Location||S Broad Street, Philadelphia, Pennsylvania 19148|
|Opened||April 15, 1926|
|Closed||July 13, 1989|
|Demolished||September 19–24, 1992|
|Owner||City of Philadelphia|
|Architect||Simon & Simon|
|Capacity||102,000 (for American football)|
|Philadelphia Quakers (AFL) (1926)
Philadelphia Eagles (NFL) (1936-1939, 1941)
Liberty Bowl (NCAA) (1959-1963)
Army–Navy Game (NCAA) (1936-1979)
Philadelphia Bell (WFL) (1974)
John F. Kennedy Stadium (formally Philadelphia Municipal Stadium) was an open-air stadium in Philadelphia, Pennsylvania that stood from 1926 to 1992. The South Philadelphia stadium was situated on the east side of the far southern end of Broad Street at a location that is now part of the South Philadelphia Sports Complex. Designed by the architectural firm of Simon & Simon in a classic 1920s style with a horseshoe seating design that surrounded a track and football field, at its peak the facility seated in excess of 102,000 people. Bleachers were later added at the open (North) end. Each section of the main portion of the stadium contained its own entrance, which displayed the letters of each section above the entrance, in a nod to ancient Roman stadia. Section designators were divided at the south end of the stadium (the bottom of the "U" shape) between West and East, starting with Sections WA and EA and proceeding north. The north bleachers started with Section NA.
The field was 110 feet (34 m) wide and 307 feet (94 m) long. It was built of concrete, stone, and brick on a 13.5-acre (55,000 m2) tract.
Opening and names
JFK Stadium was built as part of the 1926 Sesquicentennial International Exposition. Originally known as Sesquicentennial Stadium when it opened April 15, 1926, the structure was renamed "Philadelphia Municipal Stadium" after the Exposition's closing ceremonies. In 1964, it was renamed John F. Kennedy Stadium in memory of the 35th President of the United States who had been assassinated the year before.
The stadium's first tenants (in 1926) were the Philadelphia Quakers of the first American Football League, whose Saturday afternoon home games were a popular mainstay of the Exposition. The Quakers won the league championship but the league folded after one year.
The Frankford Yellow Jackets also played here intermittently until the team's demise in 1931. Two years later the National Football League awarded another team to the city, the Philadelphia Eagles. The Eagles had a four-season stint as tenants of the stadium before moving to Shibe Park for the 1940 season, although the team did play at Municipal in 1941. The Eagles also used the stadium for practices in the 1970s and 1980s, even locating their first practice bubble there before moving it to the Veterans Stadium parking lot following the stadium's condemnation.
The stadium became known chiefly as the "neutral" venue for a total of 41 annual Army–Navy Games played there between 1936 and 1979, and during the 1960s it served as Navy's home field when they played Notre Dame.
A.F. “Bud” Dudley, a former Villanova University athletic-director, created the Liberty Bowl in Philadelphia in 1959. The game was played at Municipal Stadium and was the only cold-weather bowl game of its time. It was plagued by poor attendance; the 1963 game between Mississippi State and NC State drew less than 10,000 fans and absorbed a loss in excess of $40,000. The Liberty Bowl’s best game was its first in 1959, when 38,000 fans watched Penn State beat Alabama, 7-0. Atlantic City convinced Dudley to move his game from Philadelphia to Atlantic City's Convention Hall for 1964. 6,059 fans saw Utah rout West Virginia in the first Bowl Game played indoors. Dudley moved the game to Memphis in 1965 where it has been played since.
The stadium hosted Philadelphia's City Title high school football championship game in 1939 and 1978. St. Joe's Prep defeated Northeast, 27 to 6, in 1939. Frankford beat Archbishop Wood, 27 to 7, in heavy rain in 1978.
The stadium was home to the Philadelphia Bell of the World Football League in 1974; the team played at Franklin Field in 1975. In 1958 the stadium hosted a CFL game between the Hamilton Tiger-Cats and the Ottawa Rough Riders with proceeds from ticket sales going to local charities.
Other sports
On September 23, 1926, an announced crowd of 120,557 packed the then-new Stadium during a rainstorm to witness Gene Tunney capture the world heavyweight boxing title from Jack Dempsey. Undefeated Rocky Marciano knocked out Jersey Joe Walcott at the stadium in 1952 to win boxing's heavyweight championship.
JFK Stadium hosted Team America's soccer match against England on May 31, 1976, as part of the 1976 U.S.A. Bicentennial Cup Tournament. In the game, England defeated Team America, 3-1, in front of a small crowd of 16,239. England and Italy had failed to qualify for the 1976 European Championship final tournament and so they joined Brazil and Team America, composed of international stars playing in the North American Soccer League, in the four team competition. Because Team America was composed of international players and was not the American national team, the Football Association does not regard England's match against Team America as an official international match.
JFK Stadium was one of fifteen United States stadia (and along with Franklin Field one of two in Philadelphia) inspected by a five-member FIFA committee in April 1988 in the evaluation of the United States as a possible host of the 1994 FIFA World Cup. By the time the World Cup was held in 1994, JFK Stadium had already been demolished two years prior.
Other events
The Philadelphia Flyers won their second Stanley Cup on May 27, 1975, and celebrated with a parade down Broad Street the next day that ended at the stadium. Five years later, on October 21, 1980, the Philadelphia Phillies won their first World Series; the following day, the team paraded the same route. In 1981, The Rolling Stones announced their world tour via a press conference at JFK. Through 1989, the Broad Street Run course ended with a lap around the track at the stadium.
JFK Stadium occasionally hosted rock concerts, including the American portion of Live Aid on July 13, 1985.
The Supremes played at the stadium on September 10, 1965.
The Beatles played at the stadium on August 16, 1966.
Judy Garland gave her last concert in America here in 1968.
Led Zeppelin was scheduled to conclude their 1977 US Tour at the stadium, but the final 7 concerts of the tour were cancelled due to the death of Robert Plant's 5-year-old son Karac. The original Led Zeppelin never played in the US again, although the surviving members performed at Live Aid.
On June 17, 1978, The Rolling Stones performed before a crowd of 100,000 fans. Opening acts included Bob Marley's former bandmate Peter Tosh and Foreigner. After the Stones finished their set, rowdy concertgoers began throwing anything they could get onto the stage, which was shaped into The Rolling Stones' "tongue" logo. Damage to the stage was estimated at a million dollars as smoke came pouring out, marring an otherwise great day of vintage Rolling Stones. The band also began their sixth U.S. tour at the stadium on September 25, 1981.
Fleetwood Mac played at JFK in July 1978, with the Steve Miller Band, Bob Welch, and the Sanford-Townsend Band as opening acts.
Blondie concluded their Tracks Across America Tour here on August 21, 1982. They disbanded shortly thereafter, due to guitarist Chris Stein being diagnosed with pemphigus, a rare life-threatening disease, and to the poor sales of The Hunter; they did not perform live again for fifteen years, until 1997. Genesis headlined the show and used the open-air stadium for one of their spectacular nighttime laser and fireworks displays. The show started at 3 pm and also featured Elvis Costello and the Attractions, A Flock of Seagulls, and Robert Hazard and the Heroes.
The Who performed at the stadium on September 25, 1982, early into their (then) Farewell Tour which also supported their album It's Hard. Opening acts for the show were Santana and The Clash. A total of 91,451 were in attendance, one of the largest ticketed single-show, non-festival stadium concerts ever held in the U.S., as documented by Billboard.
Journey headlined a concert June 4, 1983. The show featured Bryan Adams, The Tubes, Sammy Hagar and John Cougar (as John Mellencamp was referred to at the time). This show provided the majority of the concert footage for an NFL Films produced documentary called "Journey, Frontiers and Beyond".
Live Aid was primarily a dual-venue benefit concert held on July 13, 1985, staged simultaneously at Wembley Stadium in London, England (attended by 72,000 people) and at John F. Kennedy Stadium in Philadelphia (attended by about 100,000 people), with further events in other countries. Musical acts that appeared in Philadelphia included Madonna, the former members of Led Zeppelin, a Crosby, Stills, Nash and Young reunion, Mick Jagger and Tina Turner, and Bob Dylan accompanied by Keith Richards and Ronnie Wood.
Pink Floyd held a concert there on September 19, 1987, in front of a crowd in excess of 120,000 (general admission was sold on the field), but the show was not sold out.
U2 played here on September 25, 1987 with Bruce Springsteen joining them on stage.
The stadium also played host to Amnesty International's Human Rights Now! Benefit Concert on September 19, 1988. The show was headlined by Sting and Peter Gabriel and also featured Bruce Springsteen & The E Street Band, Tracy Chapman, Youssou N'Dour and Joan Baez.
It was not known at the time, but the stadium's last event was The Grateful Dead's concert on July 7, 1989, with Bruce Hornsby & The Range as their opening act. Fans at the show recall concrete crumbling and bathrooms in poor shape. The Dead closed the show with "Knockin' on Heaven's Door"; it would be the last song played at the stadium. In 2010, the concert recording was released on a CD/DVD combination, titled Crimson White & Indigo.
Closing and demolition
Six days after the Grateful Dead's 1989 show, then-Mayor Wilson Goode condemned JFK stadium, with multiple findings by city inspectors that the structure was unsafe due to fire hazards and crumbling concrete. The stadium was demolished on September 23, 1992.
The 1993 Philadelphia stop for the Lollapalooza music festival was held at the JFK Stadium site on July 18, 1993. The site was an open field as construction had not yet begun on the then still tentatively named "Spectrum II" (Wells Fargo Center). This was the show at which Rage Against the Machine did not play, in protest of the Parents Music Resource Center.
Further reading
- Grateful Dead's July 7, 1989 JFK Concert
- Site of JFK/Municipal Stadium via Google Maps
- Aerial photograph of JFK/Municipal Stadium in 1927
A typical axial-lead resistor
A resistor obeys Ohm's law, I = V / R, where I is the current through the conductor in units of amperes, V is the potential difference measured across the conductor in units of volts, and R is the resistance of the conductor in units of ohms.
The ratio of the voltage applied across a resistor's terminals to the intensity of current in the circuit is called its resistance, and this can be assumed to be a constant (independent of the voltage) for ordinary resistors working within their ratings.
Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors can be made of various compounds and films, as well as resistance wire (wire made of a high-resistivity alloy, such as nickel-chrome). Resistors are also implemented within integrated circuits, particularly analog devices, and can also be integrated into hybrid and printed circuits.
The electrical functionality of a resistor is specified by its resistance: common commercial resistors are manufactured over a range of more than nine orders of magnitude. When specifying that resistance in an electronic design, the required precision of the resistance may require attention to the manufacturing tolerance of the chosen resistor, according to its specific application. The temperature coefficient of the resistance may also be of concern in some precision applications. Practical resistors are also specified as having a maximum power rating which must exceed the anticipated power dissipation of that resistor in a particular circuit: this is mainly of concern in power electronics applications. Resistors with higher power ratings are physically larger and may require heat sinks. In a high-voltage circuit, attention must sometimes be paid to the rated maximum working voltage of the resistor.
Practical resistors have a series inductance and a small parallel capacitance; these specifications can be important in high-frequency applications. In a low-noise amplifier or pre-amp, the noise characteristics of a resistor may be an issue. The unwanted inductance, excess noise, and temperature coefficient are mainly dependent on the technology used in manufacturing the resistor. They are not normally specified individually for a particular family of resistors manufactured using a particular technology. A family of discrete resistors is also characterized according to its form factor, that is, the size of the device and the position of its leads (or terminals) which is relevant in the practical manufacturing of circuits using them.
The ohm (symbol: Ω) is the SI unit of electrical resistance, named after Georg Simon Ohm. An ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm (1 mΩ = 10⁻³ Ω), kilohm (1 kΩ = 10³ Ω), and megohm (1 MΩ = 10⁶ Ω) are also in common usage.
The reciprocal of resistance R is called conductance, G = 1/R, and is measured in siemens (SI unit), sometimes referred to as a mho. Hence the siemens is the reciprocal of an ohm: 1 S = 1 Ω⁻¹. Although the concept of conductance is often used in circuit analysis, practical resistors are always specified in terms of their resistance (ohms) rather than conductance.
Electronic symbols and notation
The symbol used for a resistor in a circuit diagram varies from standard to standard and country to country. Two typical symbols are as follows:
IEC-style resistor symbol
The notation to state a resistor's value in a circuit diagram varies, too. The European notation avoids using a decimal separator, and replaces the decimal separator with the SI prefix symbol for the particular value. For example, 8k2 in a circuit diagram indicates a resistor value of 8.2 kΩ. Additional zeros imply tighter tolerance, for example 15M0. When the value can be expressed without the need for an SI prefix, an 'R' is used instead of the decimal separator. For example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω. The use of a SI prefix symbol or the letter 'R' circumvents the problem that decimal separators tend to 'disappear' when photocopying a printed circuit diagram.
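This notation is mechanical enough to parse programmatically. A minimal Python sketch (the function name and the set of multiplier letters handled are illustrative choices, not from any standard library):

```python
def parse_rkm(code):
    """Parse a resistor value written with a letter as decimal separator,
    such as '8k2', '1R2', '18R', or '15M0'.

    The letter serves as both the decimal separator and the SI multiplier:
    'R' = 1, 'k' = 1e3, 'M' = 1e6 (lowercase 'm' = 1e-3 for milliohms).
    Returns the resistance in ohms.
    """
    multipliers = {"R": 1, "k": 1e3, "K": 1e3, "M": 1e6, "m": 1e-3}
    for letter, mult in multipliers.items():
        if letter in code:
            whole, _, frac = code.partition(letter)
            value = float((whole or "0") + "." + (frac or "0"))
            return value * mult
    raise ValueError(f"no multiplier letter in {code!r}")

print(parse_rkm("8k2"))   # 8200.0
print(parse_rkm("1R2"))   # 1.2
print(parse_rkm("18R"))   # 18.0
```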
Theory of operation
Ohm's law
The behavior of an ideal resistor is dictated by the relationship specified by Ohm's law:

V = I × R

Ohm's law states that the voltage (V) across a resistor is proportional to the current (I), where the constant of proportionality is the resistance (R).
Equivalently, Ohm's law can be stated:

I = V / R

This formulation states that the current (I) is proportional to the voltage (V) and inversely proportional to the resistance (R). This is used directly in practical computations. For example, if a 300 ohm resistor is attached across the terminals of a 12 volt battery, then a current of 12 / 300 = 0.04 amperes (or 40 milliamperes) flows through that resistor.
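The worked example above can be checked with a couple of lines of Python (the function name is an illustrative choice):

```python
def current(voltage, resistance):
    """Ohm's law: I = V / R, with V in volts and R in ohms."""
    return voltage / resistance

# The example from the text: a 300 ohm resistor across a 12 V battery.
i = current(12, 300)
print(i)  # 0.04 A, i.e. 40 mA
```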
Series and parallel resistors
In a series configuration, the current through all of the resistors is the same, but the voltage across each resistor will be in proportion to its resistance. The potential difference (voltage) seen across the network is the sum of those voltages, thus the total resistance can be found as the sum of those resistances:

Req = R1 + R2 + … + Rn
As a special case, the resistance of N resistors connected in series, each of the same resistance R, is given by NR.
Resistors in a parallel configuration are each subject to the same potential difference (voltage), however the currents through them add. The conductances of the resistors then add to determine the conductance of the network. Thus the equivalent resistance (Req) of the network can be computed:

1/Req = 1/R1 + 1/R2 + … + 1/Rn
The parallel equivalent resistance can be represented in equations by two vertical lines "||" (as in geometry) as a simplified notation. Occasionally two slashes "//" are used instead of "||", in case the keyboard or font lacks the vertical line symbol. For the case of two resistors in parallel, this can be calculated using:

Req = (R1 × R2) / (R1 + R2)
As a special case, the resistance of N resistors connected in parallel, each of the same resistance R, is given by R/N.
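The series and parallel formulas translate directly into code. A small Python sketch (function names are arbitrary):

```python
def series(*resistances):
    """Equivalent resistance of resistors in series: the plain sum."""
    return sum(resistances)

def parallel(*resistances):
    """Equivalent resistance in parallel: reciprocal of the sum of reciprocals."""
    return 1.0 / sum(1.0 / r for r in resistances)

print(series(100, 220, 330))  # 650
print(parallel(100, 100))     # two equal resistors give R/N, here 50 ohms
print(parallel(20, 30))       # (R1 * R2) / (R1 + R2), mathematically 12 ohms
```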
A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other. For instance, a 10 Ω resistor in series with a parallel pair of 20 Ω resistors has an equivalent resistance of 10 + (20 × 20)/(20 + 20) = 20 Ω.
However, some complex networks of resistors cannot be resolved in this manner, requiring more sophisticated circuit analysis. For instance, consider a cube, each edge of which has been replaced by a resistor. What then is the resistance that would be measured between two opposite vertices? In the case of 12 equivalent resistors, it can be shown that the corner-to-corner resistance is 5⁄6 of the individual resistance. More generally, the Y-Δ transform, or matrix methods can be used to solve such a problem.
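The cube result can also be reached with a symmetry argument, written out with exact fractions: the current entering one corner splits equally across three edges, then across six, then recombines across three, giving a corner-to-corner resistance of R/3 + R/6 + R/3. A quick check in Python:

```python
from fractions import Fraction

# One unit resistance per edge of the cube.
R = Fraction(1)

# By symmetry the path contributes R/3, then R/6, then R/3.
corner_to_corner = R / 3 + R / 6 + R / 3
print(corner_to_corner)  # 5/6
```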
One practical application of these relationships is that a non-standard value of resistance can generally be synthesized by connecting a number of standard values in series or parallel. This can also be used to obtain a resistance with a higher power rating than that of the individual resistors used. In the special case of N identical resistors all connected in series or all connected in parallel, the power rating of the individual resistors is thereby multiplied by N.
Power dissipation
The power P dissipated by a resistor is calculated as:

P = I × V = I² × R = V² / R

The first form is a restatement of Joule's first law. Using Ohm's law, the two other forms can be derived.
The total amount of heat energy released over a period of time can be determined from the integral of the power over that period of time:

W = ∫ P(t) dt
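As a numerical cross-check of the equivalent power forms P = IV = I²R = V²/R, a sketch (the helper accepts any two of the three quantities):

```python
def power(v=None, i=None, r=None):
    """Dissipated power from any two of voltage, current, resistance."""
    if v is not None and i is not None:
        return v * i            # P = V * I  (Joule's first law)
    if i is not None and r is not None:
        return i * i * r        # P = I^2 * R
    if v is not None and r is not None:
        return v * v / r        # P = V^2 / R
    raise ValueError("need two of v, i, r")

# 12 V across 300 ohms: all three forms agree on 0.48 W.
print(power(v=12, r=300))
print(power(i=0.04, r=300))
print(power(v=12, i=0.04))
```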
Resistors are rated according to their maximum power dissipation. Most discrete resistors in solid-state electronic systems absorb much less than a watt of electrical power and require no attention to their power rating. Such resistors in their discrete form, including most of the packages detailed below, are typically rated as 1/10, 1/8, or 1/4 watt.
Resistors required to dissipate substantial amounts of power, particularly used in power supplies, power conversion circuits, and power amplifiers, are generally referred to as power resistors; this designation is loosely applied to resistors with power ratings of 1 watt or greater. Power resistors are physically larger and may not use the preferred values, color codes, and external packages described below.
If the average power dissipated by a resistor is more than its power rating, damage to the resistor may occur, permanently altering its resistance; this is distinct from the reversible change in resistance due to its temperature coefficient when it warms. Excessive power dissipation may raise the temperature of the resistor to a point where it can burn the circuit board or adjacent components, or even cause a fire. There are flameproof resistors that fail (open circuit) before they overheat dangerously.
Because poor air circulation, high altitude, or high operating temperatures may occur in service, resistors may be specified with a higher rated dissipation than will actually be experienced.
Some types and ratings of resistors may also have a maximum voltage rating; this may limit available power dissipation for higher resistance values.
Lead arrangements
Through-hole components typically have leads leaving the body axially. Others have leads coming off their body radially instead of parallel to the resistor axis. Other components may be SMT (surface mount technology) while high power resistors may have one of their leads designed into the heat sink.
Carbon composition
Carbon composition resistors consist of a solid cylindrical resistive element with embedded wire leads or metal end caps to which the lead wires are attached. The body of the resistor is protected with paint or plastic. Early 20th-century carbon composition resistors had uninsulated bodies; the lead wires were wrapped around the ends of the resistance element rod and soldered. The completed resistor was painted for color-coding of its value.
The resistive element is made from a mixture of finely ground (powdered) carbon and an insulating material (usually ceramic). A resin holds the mixture together. The resistance is determined by the ratio of the fill material (the powdered ceramic) to the carbon: higher concentrations of carbon, a good conductor, result in lower resistance. Carbon composition resistors were commonly used in the 1960s and earlier, but are not so popular for general use now, as other types have better specifications such as tolerance, voltage dependence, and stress behavior (carbon composition resistors change value when stressed with over-voltages). Moreover, if internal moisture content (from exposure for some length of time to a humid environment) is significant, soldering heat creates a non-reversible change in resistance value. Carbon composition resistors have poor stability with time and were consequently factory sorted to, at best, only 5% tolerance. These resistors, however, if never subjected to overvoltage or overheating, were remarkably reliable considering the component's size.
Carbon composition resistors are still available, but comparatively quite costly. Values ranged from fractions of an ohm to 22 megohms. Due to their high price, these resistors are no longer used in most applications. However, they are used in power supplies and welding controls.
Carbon pile
A carbon pile resistor is made of a stack of carbon disks compressed between two metal contact plates. Adjusting the clamping pressure changes the resistance between the plates. These resistors are used when an adjustable load is required, for example in testing automotive batteries or radio transmitters. A carbon pile resistor can also be used as a speed control for small motors in household appliances (sewing machines, hand-held mixers) with ratings up to a few hundred watts. A carbon pile resistor can be incorporated in automatic voltage regulators for generators, where the carbon pile controls the field current to maintain relatively constant voltage. The principle is also applied in the carbon microphone.
Carbon film
A carbon film is deposited on an insulating substrate, and a helix is cut in it to create a long, narrow resistive path. Varying shapes, coupled with the resistivity of amorphous carbon (ranging from 500 to 800 μΩ m), can provide a variety of resistances. Compared to carbon composition they feature low noise, because of the precise distribution of the pure graphite without binding. Carbon film resistors feature a power rating range of 0.125 W to 5 W at 70 °C. Resistances available range from 1 ohm to 10 megohm. The carbon film resistor has an operating temperature range of −55 °C to 155 °C. It has 200 to 600 volts maximum working voltage range. Special carbon film resistors are used in applications requiring high pulse stability.
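The resistivity figures quoted above are enough to estimate a film resistor's value from R = ρL/A. A hedged sketch, in which the track geometry is invented purely for illustration (not taken from any datasheet):

```python
rho = 650e-6           # ohm-metres, mid-range amorphous carbon (500-800 uOhm-m in the text)
length = 10e-3         # m, unrolled length of the helical track (assumed)
width = 0.1e-3         # m, track width (assumed)
thickness = 1e-6       # m, film thickness (assumed)

# R = rho * L / A, with A the track cross-section.
resistance = rho * length / (width * thickness)
print(resistance)      # roughly 65 kilohms for this geometry
```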
Printed carbon resistor
Carbon composition resistors can be printed directly onto printed circuit board (PCB) substrates as part of the PCB manufacturing process. Whilst this technique is more common on hybrid PCB modules, it can also be used on standard fibreglass PCBs. Tolerances are typically quite large, and can be on the order of 30%. A typical application would be non-critical pull-up resistors.
Thick and thin film
Thick film resistors became popular during the 1970s, and most SMD (surface mount device) resistors today are of this type. The resistive element of thick films is 1000 times thicker than thin films, but the principal difference is how the film is applied to the cylinder (axial resistors) or the surface (SMD resistors).
Thin film resistors are made by sputtering (a method of vacuum deposition) the resistive material onto an insulating substrate. The film is then etched in a similar manner to the old (subtractive) process for making printed circuit boards; that is, the surface is coated with a photo-sensitive material, then covered by a pattern film, irradiated with ultraviolet light, and then the exposed photo-sensitive coating is developed, and underlying thin film is etched away.
Thick film resistors are manufactured using screen and stencil printing processes.
Because the time during which the sputtering is performed can be controlled, the thickness of the thin film can be accurately controlled. The type of material is also usually different consisting of one or more ceramic (cermet) conductors such as tantalum nitride (TaN), ruthenium oxide (RuO2), lead oxide (PbO), bismuth ruthenate (Bi2Ru2O7), nickel chromium (NiCr), or bismuth iridate (Bi2Ir2O7).
The resistance of both thin and thick film resistors after manufacture is not highly accurate; they are usually trimmed to an accurate value by abrasive or laser trimming. Thin film resistors are usually specified with tolerances of 0.1, 0.2, 0.5, or 1%, and with temperature coefficients of 5 to 25 ppm/K. They also have much lower noise levels, typically 10 to 100 times lower than thick film resistors.
Thick film resistors may use the same conductive ceramics, but they are mixed with sintered (powdered) glass and a carrier liquid so that the composite can be screen-printed. This composite of glass and conductive ceramic (cermet) material is then fused (baked) in an oven at about 850 °C.
Thick film resistors, when first manufactured, had tolerances of 5%, but standard tolerances have improved to 2% or 1% in the last few decades. Temperature coefficients of thick film resistors are high, typically ±200 or ±250 ppm/K; a 40 kelvin (70 °F) temperature change can change the resistance by 1%.
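The 1% figure quoted for a 40 kelvin swing follows directly from the temperature coefficient; a quick check:

```python
def drift_percent(tcr_ppm_per_k, delta_t_k):
    """Resistance change, in percent, for a given temperature change."""
    return tcr_ppm_per_k * 1e-6 * delta_t_k * 100

# A +/-250 ppm/K thick-film part over a 40 kelvin change: about 1 percent,
# matching the figure in the text.
print(drift_percent(250, 40))
```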
Thin film resistors are usually far more expensive than thick film resistors. For example, SMD thin film resistors, with 0.5% tolerances, and with 25 ppm/K temperature coefficients, when bought in full size reel quantities, are about twice the cost of 1%, 250 ppm/K thick film resistors.
Metal film
A common type of axial resistor today is referred to as a metal-film resistor. Metal electrode leadless face (MELF) resistors often use the same technology, but are a cylindrically shaped resistor designed for surface mounting. Note that other types of resistors (e.g., carbon composition) are also available in MELF packages.
Metal film resistors are usually coated with nickel chromium (NiCr), but might be coated with any of the cermet materials listed above for thin film resistors. Unlike thin film resistors, the material may be applied using different techniques than sputtering (though that is one such technique). Also, unlike thin-film resistors, the resistance value is determined by cutting a helix through the coating rather than by etching. (This is similar to the way carbon resistors are made.) The result is a reasonable tolerance (0.5%, 1%, or 2%) and a temperature coefficient that is generally between 50 and 100 ppm/K. Metal film resistors possess good noise characteristics and low non-linearity due to a low voltage coefficient. Also beneficial are the component's tight tolerance, low temperature coefficient, and long-term stability.
Metal oxide film
Metal-oxide film resistors are made of metal oxides such as tin oxide. This results in a higher operating temperature and greater stability and reliability than metal film resistors. They are used in applications with high endurance demands.
Wirewound

Wirewound resistors are commonly made by winding a metal wire, usually nichrome, around a ceramic, plastic, or fiberglass core. The ends of the wire are soldered or welded to two caps or rings, attached to the ends of the core. The assembly is protected with a layer of paint, molded plastic, or an enamel coating baked at high temperature. Because of the very high surface temperature these resistors can withstand temperatures of up to +450 °C. Wire leads in low power wirewound resistors are usually between 0.6 and 0.8 mm in diameter and tinned for ease of soldering. For higher power wirewound resistors, either a ceramic outer case or an aluminum outer case on top of an insulating layer is used. The aluminum-cased types are designed to be attached to a heat sink to dissipate the heat; the rated power is dependent on being used with a suitable heat sink, e.g., a 50 W power rated resistor will overheat at a fraction of the power dissipation if not used with a heat sink. Large wirewound resistors may be rated for 1,000 watts or more.
Because wirewound resistors are coils they have more undesirable inductance than other types of resistor, although winding the wire in sections with alternately reversed direction can minimize inductance. Other techniques employ bifilar winding, or a flat thin former (to reduce cross-section area of the coil). For the most demanding circuits, resistors with Ayrton-Perry winding are used.
Applications of wirewound resistors are similar to those of composition resistors with the exception of the high frequency. The high frequency response of wirewound resistors is substantially worse than that of a composition resistor.
Foil resistor
The primary resistance element of a foil resistor is a special alloy foil several micrometres thick. Since their introduction in the 1960s, foil resistors have had the best precision and stability of any resistor available. One of the important parameters influencing stability is the temperature coefficient of resistance (TCR). The TCR of foil resistors is extremely low, and has been further improved over the years. One range of ultra-precision foil resistors offers a TCR of 0.14 ppm/°C, tolerance ±0.005%, long-term stability (1 year) 25 ppm, (3 year) 50 ppm (further improved 5-fold by hermetic sealing), stability under load (2000 hours) 0.03%, thermal EMF 0.1 μV/°C, noise −42 dB, voltage coefficient 0.1 ppm/V, inductance 0.08 μH, capacitance 0.5 pF.
Ammeter shunts
An ammeter shunt is a special type of current-sensing resistor, having four terminals and a value in milliohms or even micro-ohms. Current-measuring instruments, by themselves, can usually accept only limited currents. To measure high currents, the current passes through the shunt, where the voltage drop is measured and interpreted as current. A typical shunt consists of two solid metal blocks, sometimes brass, mounted on to an insulating base. Between the blocks, and soldered or brazed to them, are one or more strips of low temperature coefficient of resistance (TCR) manganin alloy. Large bolts threaded into the blocks make the current connections, while much smaller screws provide voltage connections. Shunts are rated by full-scale current, and often have a voltage drop of 50 mV at rated current. Such meters are adapted to the shunt full current rating by using an appropriately marked dial face; no change need be made to the other parts of the meter.
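The 50 mV-at-rated-current convention mentioned above fixes the shunt resistance; for example (the 200 A rating here is an arbitrary illustrative choice, not from the text):

```python
full_scale_current = 200.0    # A, assumed meter rating for this example
rated_drop = 0.050            # V, the common 50 mV full-scale convention

# R = V / I gives the required shunt resistance.
shunt_resistance = rated_drop / full_scale_current
print(shunt_resistance)       # 0.00025 ohm, i.e. 0.25 milliohm
```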
Grid resistor
In heavy-duty industrial high-current applications, a grid resistor is a large convection-cooled lattice of stamped metal alloy strips connected in rows between two electrodes. Such industrial grade resistors can be as large as a refrigerator; some designs can handle over 500 amperes of current, with a range of resistances extending lower than 0.04 ohms. They are used in applications such as dynamic braking and load banking for locomotives and trams, neutral grounding for industrial AC distribution, control loads for cranes and heavy equipment, load testing of generators and harmonic filtering for electric substations.
Special varieties
Variable resistors
Adjustable resistors
A resistor may have one or more fixed tapping points so that the resistance can be changed by moving the connecting wires to different terminals. Some wirewound power resistors have a tapping point that can slide along the resistance element, allowing a larger or smaller part of the resistance to be used.
Where continuous adjustment of the resistance value during operation of equipment is required, the sliding resistance tap can be connected to a knob accessible to an operator. Such a device is called a rheostat and has two terminals.
A common element in electronic devices is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob. These variable resistors are known as potentiometers when all three terminals are present, since they act as a continuously adjustable voltage divider. A common example is a volume control for a radio receiver.
Accurate, high-resolution panel-mounted potentiometers (or "pots") have resistance elements typically wirewound on a helical mandrel, although some include a conductive-plastic resistance coating over the wire to improve resolution. These typically offer ten turns of their shafts to cover their full range. They are usually set with dials that include a simple turns counter and a graduated dial. Electronic analog computers used them in quantity for setting coefficients, and delayed-sweep oscilloscopes of recent decades included one on their panels.
Resistance decade boxes
A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in. Usually the resistance is accurate to high precision, ranging from laboratory/calibration grade accuracy of 20 parts per million, to field grade at 1%. Inexpensive boxes with lesser accuracy are also available. All types offer a convenient way of selecting and quickly changing a resistance in laboratory, experimental and development work without needing to attach resistors one by one, or even stock each value. The range of resistance provided, the maximum resolution, and the accuracy characterize the box. For example, one box offers resistances from 0 to 24 megohms, maximum resolution 0.1 ohm, accuracy 0.1%.
Special devices
There are various devices whose resistance changes with various quantities. The resistance of NTC thermistors exhibit a strong negative temperature coefficient, making them useful for measuring temperatures. Since their resistance can be large until they are allowed to heat up due to the passage of current, they are also commonly used to prevent excessive current surges when equipment is powered on. Similarly, the resistance of a humistor varies with humidity. Metal oxide varistors drop to a very low resistance when a high voltage is applied, making them useful for protecting electronic equipment by absorbing dangerous voltage surges. One sort of photodetector, the photoresistor, has a resistance which varies with illumination.
The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain. A single resistor may be used, or a pair (half bridge), or four resistors connected in a Wheatstone bridge configuration. The strain resistor is bonded with adhesive to an object that will be subjected to mechanical strain. With the strain gauge and a filter, amplifier, and analog/digital converter, the strain on an object can be measured.
A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. It passes a current whose magnitude can vary by a factor of 10¹² in response to changes in applied pressure.
The value of a resistor can be measured with an ohmmeter, which may be one function of a multimeter. Usually, probes on the ends of test leads connect to the resistor. A simple ohmmeter may apply a voltage from a battery across the unknown resistor (with an internal resistor of a known value in series) producing a current which drives a meter movement. The current, in accordance with Ohm's Law, is inversely proportional to the sum of the internal resistance and the resistor being tested, resulting in an analog meter scale which is very non-linear, calibrated from infinity to 0 ohms. A digital multimeter, using active electronics, may instead pass a specified current through the test resistance. The voltage generated across the test resistance in that case is linearly proportional to its resistance, which is measured and displayed. In either case the low-resistance ranges of the meter pass much more current through the test leads than do high-resistance ranges, in order for the voltages present to be at reasonable levels (generally below 10 volts) but still measurable.
Measuring low-value resistors, such as fractional-ohm resistors, with acceptable accuracy requires four-terminal connections. One pair of terminals applies a known, calibrated current to the resistor, while the other pair senses the voltage drop across the resistor. Some laboratory quality ohmmeters, especially milliohmmeters, and even some of the better digital multimeters sense using four input terminals for this purpose, which may be used with special test leads. Each of the two so-called Kelvin clips has a pair of jaws insulated from each other. One side of each clip applies the measuring current, while the other connections are only to sense the voltage drop. The resistance is again calculated using Ohm's Law as the measured voltage divided by the applied current.
Production resistors
Resistor characteristics are quantified and reported using various national standards. In the US, MIL-STD-202 contains the relevant test methods to which other standards refer.
There are various standards specifying properties of resistors for use in equipment:
- BS 1852
- MIL-PRF-39007 (Fixed Power, established reliability)
- MIL-PRF-55342 (Surface-mount thick and thin film)
- MIL-R-11 STANDARD CANCELED
- MIL-R-39017 (Fixed, General Purpose, Established Reliability)
- MIL-PRF-32159 (zero ohm jumpers)
There are other United States military procurement MIL-R- standards.
Resistance standards
The primary standard for resistance, the "mercury ohm", was initially defined in 1884 as a column of mercury 106.3 cm long and 1 square millimeter in cross-section, at 0 degrees Celsius. Difficulties in precisely measuring the physical constants to replicate this standard resulted in variations of as much as 30 ppm. From 1900 the mercury ohm was replaced with a precision machined plate of manganin. Since 1990 the international resistance standard has been based on the quantized Hall effect discovered by Klaus von Klitzing, for which he won the Nobel Prize in Physics in 1985.
Resistors of extremely high precision are manufactured for calibration and laboratory use. They may have four terminals, using one pair to carry an operating current and the other pair to measure the voltage drop; this eliminates errors caused by voltage drops across the lead resistances, because no charge flows through voltage sensing leads. It is important in small value resistors (100–0.0001 ohm) where lead resistance is significant or even comparable with respect to resistance standard value.
Resistor marking
Most axial resistors use a pattern of colored stripes to indicate resistance. Surface-mount resistors are marked numerically, if they are big enough to permit marking; more-recent small sizes are impractical to mark. Cases are usually tan, brown, blue, or green, though other colors are occasionally found such as dark red or dark gray.
Early 20th century resistors, essentially uninsulated, were dipped in paint to cover their entire body for color-coding. A second color of paint was applied to one end of the element, and a color dot (or band) in the middle provided the third digit. The rule was "body, tip, dot", providing two significant digits for value and the decimal multiplier, in that sequence. Default tolerance was ±20%. Closer-tolerance resistors had silver (±10%) or gold-colored (±5%) paint on the other end.
Preferred values
Early resistors were made in more or less arbitrary round numbers; a series might have 100, 125, 150, 200, 300, etc. Resistors as manufactured are subject to a certain percentage tolerance, and it makes sense to manufacture values that correlate with the tolerance, so that the actual value of a resistor overlaps slightly with its neighbors. Wider spacing leaves gaps; narrower spacing increases manufacturing and inventory costs to provide resistors that are more or less interchangeable.
A logical scheme is to produce resistors in a range of values which increase in a geometric progression, so that each value is greater than its predecessor by a fixed multiplier or percentage, chosen to match the tolerance of the range. For example, for a tolerance of ±20% it makes sense to have each resistor about 1.5 times its predecessor, covering a decade in 6 values. In practice the factor used is 1.4678, giving values of 1.47, 2.15, 3.16, 4.64, 6.81, 10 for the 1–10 decade (a decade is a range increasing by a factor of 10; 0.1–1 and 10–100 are other examples); these are rounded in practice to 1.5, 2.2, 3.3, 4.7, 6.8, 10; followed, of course, by 15, 22, 33, … and preceded by … 0.47, 0.68, 1. This scheme has been adopted as the E6 series of the IEC 60063 preferred number values. There are also E12, E24, E48, E96 and E192 series for components of ever tighter tolerance, with 12, 24, 48, 96, and 192 different values within each decade. The actual values used are in the IEC 60063 lists of preferred numbers.
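The geometric progression is easy to reproduce. This sketch prints the raw decade values before the IEC rounding step; note that the published E6 numbers round 3.16 up to 3.3 and 4.64 up to 4.7, so a plain round-to-nearest does not recover them exactly:

```python
def e_series_raw(n):
    """Raw geometric-progression values for one decade of an E-series:
    the k-th value is 10**(k/n).  IEC 60063 rounds these to the
    published preferred numbers, which do not always match
    round-to-nearest."""
    return [round(10 ** (k / n), 2) for k in range(n)]

print(e_series_raw(6))        # [1.0, 1.47, 2.15, 3.16, 4.64, 6.81]
print(len(e_series_raw(24)))  # 24 values per decade for +/-5% parts
```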
A resistor of 100 ohms ±20% would be expected to have a value between 80 and 120 ohms; its E6 neighbors are 68 (54–82) and 150 (120–180) ohms. With this spacing, E6 is used for ±20% components; E12 for ±10%; E24 for ±5%; E48 for ±2%; E96 for ±1%; and E192 for ±0.5% or better. Resistors are manufactured in values from a few milliohms to about a gigaohm in IEC 60063 ranges appropriate for their tolerance. Manufacturers may sort resistors into tolerance classes based on measurement. Accordingly, a selection of 100-ohm resistors with a tolerance of ±10% may not lie evenly around 100 ohms (in a bell curve) as one would expect, but rather fall into two groups: either 5 to 10% too high or 5 to 10% too low (but none closer to 100 ohms than that). Any resistors the factory measured as being less than 5% off would have been marked and sold as resistors with ±5% tolerance or better. When designing a circuit, this may become a consideration.
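The sorting behavior described above can be sketched as a simple binning rule. The bin limits follow the example in the text; real factory binning rules vary:

```python
def tolerance_bin(nominal, measured):
    """Classify a measured part as in the sorting example: parts within
    +/-5% of nominal are pulled out and sold as tighter-tolerance
    stock, so the +/-10% bin holds only 5-10% deviations.
    (Illustrative only.)"""
    dev = abs(measured - nominal) / nominal
    if dev <= 0.05:
        return "±5% bin"
    if dev <= 0.10:
        return "±10% bin"
    return "reject"

print(tolerance_bin(100, 103))  # ±5% bin
print(tolerance_bin(100, 108))  # ±10% bin
print(tolerance_bin(100, 112))  # reject
```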
Earlier power wirewound resistors, such as brown vitreous-enameled types, however, were made with a different system of preferred values, such as some of those mentioned in the first sentence of this section.
Five-band axial resistors
Five-band identification is used for higher-precision (lower-tolerance) resistors (1%, 0.5%, 0.25%, 0.1%) to specify a third significant digit. The first three bands represent the significant digits, the fourth is the multiplier, and the fifth is the tolerance. Five-band resistors with a gold or silver 4th band are sometimes encountered, generally on older or specialized resistors; since gold and silver are not digit colors, on those parts the 4th band is the tolerance and the 5th the temperature coefficient.
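A sketch of the standard five-band decoding (digit colors only; gold/silver multiplier bands and the tolerance band are ignored here):

```python
# Digit colors for the significant-figure and multiplier bands.
COLORS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "gray", "white"]

def five_band_ohms(b1, b2, b3, mult):
    """Decode a standard five-band resistor: three significant digits
    followed by a decimal multiplier band."""
    d1, d2, d3 = (COLORS.index(c) for c in (b1, b2, b3))
    return (d1 * 100 + d2 * 10 + d3) * 10 ** COLORS.index(mult)

# yellow-violet-black-red = 470 x 10^2 = 47 kilohms
print(five_band_ohms("yellow", "violet", "black", "red"))  # 47000
```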
SMT resistors
Surface mounted resistors are printed with numerical values in a code related to that used on axial resistors. Standard-tolerance surface-mount technology (SMT) resistors are marked with a three-digit code, in which the first two digits are the first two significant digits of the value and the third digit is the power of ten (the number of zeroes). For example:
- 334 = 33 × 10⁴ ohms = 330 kilohms
- 222 = 22 × 10² ohms = 2.2 kilohms
- 473 = 47 × 10³ ohms = 47 kilohms
- 105 = 10 × 10⁵ ohms = 1.0 megohm
Resistances less than 100 ohms are written: 100, 220, 470. The final zero represents ten to the power zero, which is 1. For example:
- 100 = 10 × 10⁰ ohms = 10 ohms
- 220 = 22 × 10⁰ ohms = 22 ohms
Sometimes these values are marked as 10 or 22 to prevent a mistake.
Resistances less than 10 ohms have 'R' to indicate the position of the decimal point (radix point). For example:
- 4R7 = 4.7 ohms
- R300 = 0.30 ohms
- 0R22 = 0.22 ohms
- 0R01 = 0.01 ohms
Precision resistors are marked with a four-digit code, in which the first three digits are the significant figures and the fourth is the power of ten. For example:
- 1001 = 100 × 10¹ ohms = 1.00 kilohm
- 4992 = 499 × 10² ohms = 49.9 kilohms
- 1000 = 100 × 10⁰ ohms = 100 ohms
000 and 0000 sometimes appear as values on surface-mount zero-ohm links, since these have (approximately) zero resistance.
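The marking rules above are mechanical enough to decode in a few lines. This sketch covers the three- and four-digit codes, the 'R' notation, and zero-ohm links; it does not attempt the EIA-96 letter codes used on some small precision parts:

```python
def smt_ohms(code):
    """Decode a surface-mount resistor marking.

    Handles digit-multiplier codes ("473" -> 47 kilohms, "1001" -> 1
    kilohm), the 'R' decimal-point notation ("4R7" -> 4.7 ohms), and
    zero-ohm links ("000"/"0000").
    """
    if set(code) == {"0"}:
        return 0.0                      # zero-ohm link
    if "R" in code:
        return float(code.replace("R", "."))
    digits, mult = code[:-1], code[-1]  # last digit is the power of ten
    return int(digits) * 10 ** int(mult)

for mark in ("334", "105", "1001", "4R7", "0R01", "000"):
    print(mark, "->", smt_ohms(mark), "ohms")
```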
More recent surface-mount resistors are too small, physically, to permit practical markings to be applied.
Industrial type designation
Format: [two letters]<space>[resistance value (three digit)]<nospace>[tolerance code(numerical – one digit)]
Electrical and thermal noise
In amplifying faint signals, it is often necessary to minimize electronic noise, particularly in the first stage of amplification. As a dissipative element, even an ideal resistor naturally produces a randomly fluctuating voltage or "noise" across its terminals. This Johnson–Nyquist noise is a fundamental noise source which depends only upon the temperature and resistance of the resistor, and is predicted by the fluctuation–dissipation theorem. At a given temperature, a larger resistance produces a larger voltage noise, whereas a smaller resistance produces more current noise. The thermal noise of a practical resistor may also be somewhat larger than the theoretical prediction, and that increase is typically frequency-dependent.
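The fundamental noise floor is given by v = √(4kTRB), where k is Boltzmann's constant, T the temperature, R the resistance, and B the bandwidth. A quick calculation, assuming room temperature of 300 K:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def johnson_noise_vrms(r_ohms, bandwidth_hz, temp_k=300.0):
    """RMS Johnson-Nyquist noise voltage: v = sqrt(4*k*T*R*B)."""
    return math.sqrt(4 * K_B * temp_k * r_ohms * bandwidth_hz)

# A 10 kilohm resistor over a 20 kHz audio bandwidth contributes
# about 1.8 microvolts RMS of thermal noise.
print(johnson_noise_vrms(10e3, 20e3))
```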
However, the "excess noise" of a practical resistor is an additional source of noise observed only when a charge flows through it. This is specified in units of μV/V/decade – μV of noise per volt applied across the resistor per decade of frequency. The μV/V/decade value is frequently given in dB so that a resistor with a noise index of 0 dB will exhibit 1 μV (rms) of excess noise for each volt across the resistor in each frequency decade. Excess noise is thus an example of 1/f noise. Thick-film and carbon composition resistors generate more excess noise than other types at low frequencies; wire-wound and thin-film resistors, though much more expensive, are often utilized for their better noise characteristics. Carbon composition resistors can exhibit a noise index of 0 dB while bulk metal foil resistors may have a noise index of −40 dB, usually making the excess noise of metal foil resistors insignificant. Thin film surface mount resistors typically have lower noise and better thermal stability than thick film surface mount resistors. Excess noise is also size-dependent: in general excess noise is reduced as the physical size of a resistor is increased (or multiple resistors are used in parallel), as the independently fluctuating resistances of smaller components will tend to average out.
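To relate the dB noise-index figures to actual microvolts, one can convert the dB value to a voltage ratio and root-sum-square across frequency decades (the bias voltage and decade count below are invented for illustration):

```python
def excess_noise_uv(noise_index_db, volts_across, decades):
    """Total excess-noise voltage implied by a noise index.

    A 0 dB noise index means 1 uV RMS per volt of applied DC per
    frequency decade; uncorrelated decades add in quadrature.
    """
    uv_per_volt_per_decade = 10 ** (noise_index_db / 20)
    return uv_per_volt_per_decade * volts_across * decades ** 0.5

# A 0 dB part with 10 V across it, over 3 decades (e.g. 10 Hz - 10 kHz):
print(excess_noise_uv(0, 10, 3))     # ~17.3 uV
# A -40 dB metal-foil part under the same conditions:
print(excess_noise_uv(-40, 10, 3))   # ~0.17 uV
```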
While not an example of "noise" per se, a resistor may act as a thermocouple, producing a small DC voltage differential across it due to the thermoelectric effect if its ends are at somewhat different temperatures. This induced DC voltage can degrade the precision of instrumentation amplifiers in particular. Such voltages appear in the junctions of the resistor leads with the circuit board and with the resistor body. Common metal film resistors show such an effect at a magnitude of about 20 µV/°C. Some carbon composition resistors can exhibit thermoelectric offsets as high as 400 µV/°C, whereas specially constructed resistors can reduce this number to 0.05 µV/°C. In applications where the thermoelectric effect may become important, care has to be taken (for example) to mount the resistors horizontally to avoid temperature gradients and to mind the air flow over the board.
Failure modes
The failure rate of resistors in a properly designed circuit is low compared to other electronic components such as semiconductors and electrolytic capacitors. Damage to resistors most often occurs due to overheating when the average power delivered to it (as computed above) greatly exceeds its ability to dissipate heat (specified by the resistor's power rating). This may be due to a fault external to the circuit, but is frequently caused by the failure of another component (such as a transistor that shorts out) in the circuit connected to the resistor. Operating a resistor too close to its power rating can limit the resistor's lifespan or cause a change in its resistance over time which may or may not be noticeable. A safe design generally uses overrated resistors in power applications to avoid this danger.
Low-power thin-film resistors can be damaged by long-term high-voltage stress, even below maximum specified voltage and below maximum power rating. This is often the case for the startup resistors feeding the SMPS integrated circuit.
When overheated, carbon-film resistors may decrease or increase in resistance. Carbon film and composition resistors can fail (open circuit) if running close to their maximum dissipation. This is also possible but less likely with metal film and wirewound resistors.
There can also be failure of resistors due to mechanical stress and adverse environmental factors including humidity. If not enclosed, wirewound resistors can corrode.
Surface mount resistors have been known to fail due to the ingress of sulfur into the internal makeup of the resistor. This sulfur chemically reacts with the silver layer to produce non-conductive silver sulfide. The resistor's impedance goes to infinity. Sulfur resistant and anti-corrosive resistors are sold into automotive, industrial, and military applications. ASTM B809 is an industry standard that tests a part's susceptibility to sulfur.
Variable resistors degrade in a different manner, typically involving poor contact between the wiper and the body of the resistance. This may be due to dirt or corrosion and is typically perceived as "crackling" as the contact resistance fluctuates; this is especially noticed as the device is adjusted. This is similar to crackling caused by poor contact in switches, and like switches, potentiometers are to some extent self-cleaning: running the wiper across the resistance may improve the contact. Potentiometers which are seldom adjusted, especially in dirty or harsh environments, are most likely to develop this problem. When self-cleaning of the contact is insufficient, improvement can usually be obtained through the use of contact cleaner (also known as "tuner cleaner") spray. The crackling noise associated with turning the shaft of a dirty potentiometer in an audio circuit (such as the volume control) is greatly accentuated when an undesired DC voltage is present, often indicating the failure of a DC blocking capacitor in the circuit.
See also
- Circuit design
- Electrical resistance
- Electrical impedance
- Iron-hydrogen resistor
- Shot noise
- Dummy load
- A family of resistors may also be characterized according to its critical resistance. Applying a constant voltage across resistors in that family below the critical resistance will exceed the maximum power rating first; resistances larger than the critical resistance will fail first from exceeding the maximum voltage rating. See Middleton, Wendy; Van Valkenburg, Mac E. (2002). Reference data for engineers: radio, electronics, computer, and communications (9 ed.). Newnes. pp. 5–10. ISBN 0-7506-7291-9.
- 4-terminal resistors – How ultra-precise resistors work
- Beginner's guide to potentiometers, including description of different tapers
- Color Coded Resistance Calculator - archived with WayBack Machine
- Resistor Types – Does It Matter?
- Ask The Applications Engineer – Difference between types of resistors
- Resistors and their uses
- Thick film resistors and heaters
- A very well illustrated tutorial about Resistors, Volt and Current
- Beginners guide to resistors and resistance | 1 | 23 |
<urn:uuid:4771569a-592e-4e7e-9fa3-f033108a1c57> | Touch sensors can replace mechanical switches, but first you must understand noise, materials, and software.
Many product designers have thought, "My product uses four push buttons, a rotary switch, and 7-segment displays. How do I take the first step with touch-control replacements?" Semiconductor manufacturers now supply a wide range of touch-control ICs--from stand-alone devices through high-end microcontrollers--that can do the job. They also give engineers many design tools such as royalty-free software libraries and code examples, and they sell many types of evaluation and development kits. These manufacturers have used their own tools to create the kits, sensors, and firmware. So when you must replace electromechanical controls with touch controls, IC vendors can get you close to the finish line.
But winning the race comes at the cost of careful design strategies that require engineers to think about touch controls when a project starts, not as the project wraps up. I divide the main challenges engineers face into three broad categories: noise, mechanics and materials, and software. The information in this column concentrates on low-resolution button-, slider-, and wheel-type controls, not multi-touch or high-resolution touch-screen controls.
"Engineers must understand that touch sensors are analog and not digital devices," said Yann LeFaou, mTouch marketing manager at Microchip Technology. "That's the biggest pitfall, because engineers sometimes try to replace on-off mechanical switches with touch controls without giving ambient noise a second thought."
"On a circuit board you have a thin copper trace that leads to a finger-sized conductor," said Steve Gerber, director of human-interface products at Silicon Laboratories. "Interference could couple into the touch circuits through the back of the sensor or via that thin trace. So you must understand how to route the traces to reduce their coupling with interference sources. You can surround a trace for a finger-sized touch sensor with grounded traces. Normally we recommend a cross-hatch ground pattern that helps reduce the amount of capacitance that surrounds the sensor. You aim to shield the sensor from noise yet not increase its capacitance so much that the sensor IC can't detect a finger."
"In our ICs that include a capacitive-to-digital converter (CDC) for touch sensors, engineers can run a parallel trace along the sensor-signal line," said Gerber. "Then the chip performs a capacitance measurement on the parallel trace that ends just before the sensor pad. That capacitance value provides a baseline we can subtract from the capacitance measurement from the actual sensor to get a better reading."
"The CDC can make a capacitance measurement in 40 microseconds," said Gerber. "We multiplex sensor inputs to the CDC that includes two digital-to-analog converters with current outputs. One DAC pumps current into the sensor and the other pumps current into an internal reference capacitor. The sensor IC does this 16 times and we end up with a successive approximation of the external capacitor value based upon the known internal capacitance value, and of course the known currents. The CDC also includes an accumulator that sums as many as 64 values from a single sensor, so by averaging values, we help eliminate interference."
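The conversion Gerber describes amounts to a binary search against a charge-balance comparison, followed by averaging. The sketch below is a toy model of that idea, not Silicon Labs' implementation; the 50 pF full scale and the `compare` callback are assumptions:

```python
def sar_capacitance(compare, steps=16, full_scale_pf=50.0):
    """Successive-approximation estimate of an unknown capacitance.

    `compare(trial_pf)` models the charge-balance comparison: it
    returns True when the unknown capacitance exceeds the trial value.
    """
    estimate, step = 0.0, full_scale_pf / 2
    for _ in range(steps):
        if compare(estimate + step):
            estimate += step
        step /= 2
    return estimate

def average_readings(readings):
    """Average repeated conversions, as the hardware accumulator does,
    to suppress uncorrelated interference."""
    return sum(readings) / len(readings)

unknown_pf = 13.7
est = sar_capacitance(lambda trial: unknown_pf > trial)
print(round(est, 2))                               # converges near 13.7
print(average_readings([13.6, 13.8, 13.7, 13.7]))  # ~13.7
```

With 16 steps over a 50 pF range, the resolution is 50/2¹⁶, under a femtofarad in this model.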
"At Cypress Semiconductor, we have two core sensing methods, CapSense Successive-Approximation (CSA) and CapSense Sigma-Delta (CSD), for our CapSense controllers," said Dirk Franklin, business unit director at Cypress. "Noise immunity, the ability to have robust sensing performance -- and no false triggers -- in noisy environments, is critical. The sizes of devices and components continue to get smaller, so you must understand the effects of switch-mode power supplies, LCD drivers, and RF transceivers. If you haven't done your design homework properly, these noise sources can cause problems for the touch-sensing user interface."
Mechanics and Materials
"You must involve mechanical and production engineers in product-design decisions right away," noted LeFaou of Microchip. "You need to know as soon as possible the type of plastic you can use on top of the touch controls, whether you need a curved panel, and if the touch panel requires an air gap between some layers. Getting the mechanical design nailed down early makes everyone's life easier."
"Many customers choose to employ a touch-sensing interface because it gives them a fully sealed design," explained Rishi Vasuki, product marketing manager at Microchip. "They don't want to punch holes in an enclosure and use electromechanical controls that need rubber seals or boots to protect them."
In some cases, engineers find it takes longer than expected to tune a touch-sensor system to ensure adequate performance. Things work well in their prototype but when they go into production, things have changed slightly. Perhaps stray capacitances have changed, a different PCB material has altered electronic characteristics, or the thickness of the overlay material has changed. Touch-sensor ICs and MCU software algorithms can overcome these changes, but engineers should understand they might see different materials used during R&D, design, and manufacturing processes.
"In one case, a manufacturer used a painted overlay on its capacitive touch user interface," said Cypress' Franklin. "When they went offshore for manufacturing they experienced some unexpected sensor failures. The paint used in production contained metallic particles that detuned the sensors. If the company had implemented our SmartSense auto tuning, the change in paint characteristics, which affected overall sensor capacitance, would have been accounted for, or retuned, automatically."
"Suppose you build a consumer product at several factories around the world," said Franklin. "And you use local suppliers in Eastern Europe, Mexico, and China, for example. They all produce PCBs and overlays for the same end product, but all will have minor variations in materials and tolerances, so your sensor-tuning process must take into account those differences." Cypress' SmartSense auto-tuning eliminates the need to retune in production as the supply chain changes.
"You have two main ways to create touch-sensor electrodes; first you can draw your keypad with PCB layout software," said Eduardo Viramontes, an applications engineer in Freescale's Microcontroller Solutions Group. "Second, you can use a capacitive film that typically has three or four layers, one of which is a clear plastic with a known dielectric value. Then you have another layer with the touch sensors printed with a conductive ink. This type of sensor 'sandwich' also has another conductive layer -- a ground plane -- that helps protect the keys from electrical noise. We work with a third party, the Kee Group USA, that helps customers create capacitive-switch films. Because you can place flexible capacitive films on curved surfaces, they give you many product-design and human-interface options."
Some designers have had problems with capacitive-touch electronics because of their sensitivity to moisture and their inability to detect heavy gloves. "So we came up with the mTouch Metal over Cap technology," explained Microchip's LeFaou. "When you close the gap between two metal plates in a capacitor, the capacitance changes. A PCB with the sensor conductors has a thin spacer layer with a hole above each fixed sensor element. A thin piece of metal -- the second capacitor plate -- goes over the holes. Then when someone presses on a keypad, the metal pieces move closer together and the mTouch MCU detects the capacitance change and signals a pressed key. It's a bit more than a touch, but it requires only a small force to move the outer metal cover by about 10 microns."
This type of control can carry an embossed Braille legend and users can operate controls with heavy gloves or a stylus, even with moisture on the control surface. That means washing the surface or leaving moisture on it will not cause an MCU to detect a touched sensor. Nor will someone simply feeling or wiping the surface trigger an MCU action.
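The Metal over Cap mechanism is the parallel-plate relation C = ε₀·εr·A/d: closing the gap d raises the capacitance. The key area, rest gap, and 10-micron deflection below are illustrative numbers, not Microchip's specifications:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

area = (8e-3) ** 2                           # 8 mm x 8 mm metal key
c_rest = plate_capacitance(area, 100e-6)     # 100 um gap at rest
c_pressed = plate_capacitance(area, 90e-6)   # gap closes by 10 um
print(round(c_rest * 1e12, 2), "pF ->", round(c_pressed * 1e12, 2), "pF")
```

Even this small deflection changes the capacitance by roughly 10%, a shift an MCU can easily detect against its baseline.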
But moisture might not have as big an effect as some think. "You can add a sensing region on a touch-sensor keypad just to detect whether you have water on the surface," said Parker Dorris, an applications engineer at Silicon Laboratories. "It could be just a guard ring that goes around the sensing region. If you see a change in capacitance across that guard, you know you have something going on across the face of the keypad. But most often, designers rely on firmware that determines the characteristics of the change in capacitance and determines whether you have a finger, noise, or moisture."
"A human is full of electromagnetic interference that we can sense with these capacitive-to-digital converters," noted Silicon Labs' Gerber. "When you put water on the surface it doesn't cause much noise because water's capacitance to ground is small and it doesn't absorb and transmit a great deal of electrical interference. So one of the ways to tell the difference between water and human touch is that a human touch is inherently noisier and you can detect that higher noise level."
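That observation suggests a two-feature test: both water and a finger shift the mean reading, but only a finger also raises the sample-to-sample noise. The thresholds and sample counts in this sketch are invented for illustration:

```python
def classify_contact(samples, baseline, touch_delta=30, noise_floor=4.0):
    """Toy classifier: a contact shifts the mean sensor count above
    `baseline`, and a finger (unlike water) also injects noise,
    visible as variance across samples.  Thresholds are illustrative.
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    if mean - baseline < touch_delta:
        return "no contact"
    return "finger" if var > noise_floor else "water"

print(classify_contact([100, 101, 100, 99], baseline=100))   # no contact
print(classify_contact([148, 139, 155, 142], baseline=100))  # finger
print(classify_contact([136, 137, 136, 137], baseline=100))  # water
```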
"Engineers might not realize the importance of software in touch-sensor controls," emphasized Freescale's Viramontes. "Software can determine whether a person has touched a control or if someone is wiping a cloth across the sensors. That's an issue with touch panels on industrial equipment or appliances that people clean regularly. And you might have high humidity and moisture on the surface. Good software can handle those conditions. Our Xtrinsic Touch Sensing Software (TSS) library includes algorithms that filter out wiping actions and moisture on touch sensors. The libraries work with our Touch Sensing Input hardware in the Kinetis MCU family and the TSS code gives the 8-bit S08 and 32-bit ColdFire V1 MCUs touch-sensor capabilities on general-purpose I/O pins."
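One simple filter of the kind such libraries apply (an illustrative sketch, not the actual TSS algorithm) rejects scans in which many keys go active at once, since a cloth wipe or a moisture film tends to cover several electrodes while a fingertip covers only one or two:

```python
def valid_key_press(active_keys, max_simultaneous=2):
    """Reject scans where too many keys are active at once, a pattern
    typical of a cleaning wipe or a moisture film rather than a finger.
    Returns the accepted key number, or None if the scan is rejected.
    """
    if not active_keys:
        return None
    if len(active_keys) > max_simultaneous:
        return None          # looks like a wipe or moisture event
    return min(active_keys)  # simplest arbitration: lowest-numbered key

print(valid_key_press([3]))           # 3
print(valid_key_press([1, 2, 3, 4]))  # None (wipe rejected)
```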
"The MCUs that connect to touch sensors will have other tasks, too," stressed Vasuki of Microchip. "They might control an LCD, handle a USB interface, and operate a motor. When you try to integrate these functions you might find another designer plans to use internal peripherals assigned to touch sensors to measure temperatures or pressures and someone plans to use a USB port for streaming data. They each take processor time. As a project begins, designers might not think about sharing MCU resources. So in some cases, engineers might need a mini-scheduler or a small RTOS that lets software efficiently share peripherals and I/O devices."
"If designers create battery-powered products with touch sensors, they must figure out how to reduce power consumption," said Vasuki. "Mechanical switches draw no power and could wake up a low-power MCU. But in an analog touch-sensing design, the MCU must actively sense the analog inputs, so it can use more power. Several types of MCUs now lower power consumption and still perform touch sensing, which makes touch sensors more attractive in portable devices."
"We offer general-purpose, low-cost MCUs with the necessary peripherals to implement dedicated touch controllers, when someone needs to replace mechanical switches," noted Microchip's LeFaou. "But as the complexity of a design increases and designers have a complex application, they could choose to integrate touch controls into a high-end processor that will handle the application code. So there's a need for touch controls at both ends of the spectrum."
For further reading:
"Capacitive Sensing through Long Wires," AN529, Silicon Labs, 2010. http://www.silabs.com/Support Documents/TechnicalDocs/AN529.pdf.
"CapSense SmartSense Basics," AN57316, Cypress Semiconductor. http://www.cypress.com/?rID=39252
"Four Button CapSense Design using CY8CMBR2044," AN59004, Cypress Semiconductor. http://www.cypress.com/?rID=45830
"Introduction to Capacitive Sensing," AN1101, Microchip Technology. http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1824&appnote=en531112.
"Printed Circuit Design Notes for Capacitive Sensing with the CS0 Module," AN447, Silicon Labs, 2009.http://www.silabs.com/Support Documents/TechnicalDocs/AN447.pdf
"mTouchTM Metal Over Cap Technology," AN1325, Microchip Technology, 2010. http://ww1.microchip.com/downloads/en/AppNotes/01325A.pdf.
"Techniques for Robust Touch Sensing Design," AN1334, Microchip Technology. http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1824&appnote=en549837.
Freescale Semiconductor Touch Sensors: http://www.freescale.com/webapp/sps/site/taxonomy.jsp?code=SNSPROXIMITY.
Guerra, Eduardo Viramontes, "Migration to Touch Sensing Software 2.0," AN4213, Freescale Semiconductor. http://cache.freescale.com/files/sensors/doc/app_note/AN4213.pdf.
Knirsch, Paulo, et al., "Touchsensing Software 2.0FTF Hands-on Session," Freescale Technology Forum, June 2010. http://cache.freescale.com/files/ftf_2010/Americas/FTF10_ENT_F0458.pdf. | 1 | 7 |
<urn:uuid:e3ce02e3-dbf5-494e-a348-9d3a7edb5bab> | A heart-healthy diet is low in salt and saturated fats and high in soluble fiber and nutrients. Making heart-healthy changes to your diet is easy if you make one change a month.
Sleep apnea may cause or aggravate heart disease by creating surges of adrenaline, which likely contribute to high blood pressure and strain the cardiovascular system.
Although new blood thinners such as dabigatran (Pradaxa) and rivaroxaban (Xarelto) do not require regular monitoring, people who manage well on warfarin may want to stay on warfarin until additional information on these new drugs becomes available.
People with aortic stenosis eventually need to have the aortic valve replaced. Whether it is done before or after symptoms appear depends on whether the goal is to prevent sudden death (a common consequence of aortic stenosis) or preserve quality of life.
The new wireless implantable cardioverter-defibrillator will likely be useful for people at risk for a life-threatening arrhythmia who do not need the device to pace a slow or fast heart rhythm.
Although nearly 80% of people who undergo angioplasty and stenting discuss the procedure with their doctor, less than 20% are told about possible drawbacks, and only 10% are told about other options.
Angiotensin-converting enzyme (ACE) inhibitors are important to the length and quality of life in people with heart disease or hypertension. Doctors do not agree on the value of these drugs in people undergoing bypass surgery.
The health of a person with a sudden cardiac arrest who is admitted to a hospital through the emergency department may be as important to the outcome as the quality of care the person receives.
Type O blood is associated with the lowest risk of coronary artery disease. People with type A, B, and AB have risks 5%, 10%, and 23% higher than those with type O, respectively.
Restless legs syndrome lasting more than three years appears to increase the risk of developing coronary artery disease in women.
Our concept of heart attack is changing
Diagnosing a heart attack requires a blood test for troponin, plus symptoms or evidence of heart attack on an ECG or imaging test. There are six different types of heart attack, each of which may be treated differently.
Ask the doctor: What does my doctor mean by "clearance for surgery"?
Cataract surgery puts very little strain on the heart. In the absence of symptoms of heart disease, the cardiovascular risks of cataract surgery are low.
Ask the doctors: Can I stop taking antiplatelet drugs to have my hip replaced?
For people with stents who need elective surgery, it's safest to take antiplatelet therapy for six to 12 months before stopping it in order to have an operation.
New approach to fighting heart disease
When lowering traditional risk factors fails to prevent a heart attack or stroke, targeting inflammation may help. Two clinical trials are beginning to test whether anti-inflammatory drugs can provide additional protection.
Aortic aneurysm: a potential killer
Fatty deposits in the aorta can cause this blood vessel to bulge outward, causing an abdominal aortic aneurysm, which can weaken and burst, usually without warning.
How sleep apnea affects the heart
In the sleep disorder called sleep apnea, sleep is interrupted many times a night. Sleep apnea appears to increase the risk of developing or dying from heart disease. Several treatments are available to halt sleep apnea and restore better sleep.
Can you die of a broken heart?
In people at risk for heart disease, the stress of losing a loved one greatly increases the risk of suffering a fatal or nonfatal heart attack. In healthy people, the stress can cause a serious but reversible condition that imitates a heart attack.
Some heart attacks go unrecognized
More than one-third of heart attacks produce no symptoms, yet these so-called silent heart attacks are as dangerous as heart attacks that do cause symptoms.
Heart Advances from Harvard: Risk factors for peripheral artery disease pinpointed
Type 2 diabetes, high blood pressure, past or current smoking, and high cholesterol are the four factors most closely associated with the development of peripheral artery disease-blockages in the arteries of the legs or arms.
Heart Beat: Aspirin may prevent blood clots in the legs from recurring
Blood clots in the legs are treated with several months of warfarin (Coumadin). After this period, low-dose aspirin may be a reasonable alternative to long-term use of warfarin for preventing another blood clot.
Should bypass surgery be done off-pump?
Anyone at risk for complications from the heart-lung machine during open-heart surgery should have the procedure performed without it, or "off-pump." For everyone else, whether or not to use the pump should be left up to the surgeon.
Ask the doctors: Why do I need to take blood thinners after a valve replacement?
After receiving a bioprosthetic heart valve, using warfarin for three to six months can lower the risk of a blood clot without increasing the risk of unwanted bleeding.
Ask the doctors: Is interval training safe for someone with heart disease?
High-intensity interval training offers an excellent cardiovascular workout. Anyone with heart disease who has not been very active and wants to try this approach should have a stress test first.
What we need: geriatric cardiology
Cardiologists lack evidence on how to treat older people with heart disease, who often take many medications and have other medical problems. This creates uncertainty about the risks versus the benefits of conventional treatments.
New thinking about stable heart disease
People with stable heart disease are at low risk for heart attack and may not need invasive treatment until significant chest pain is no longer relieved by medication.
Coming soon: many drugs in one pill
A pill that contains several different heart medications is being investigated as a way to help people take the medicines they need.
Don't ignore "mild" strokes
Strokes that produce only mild symptoms still damage brain cells. The accumulated damage from several mild strokes may be harmful and irreversible. Anyone who experiences the symptoms of stroke, no matter how mild, should call 911.
Generics as safe as brand-name drugs
Problems in manufacturing occasionally occur with both generic and brand-name drugs. FDA protocols enable problems to be caught quickly and remedied immediately.
Smoking interferes with bypasses
Smoking adversely affects the quality of leg veins used to bypass blockages in the heart's arteries, increasing the risk of graft failure.
Heart Advances from Harvard: Daily multivitamins do not prevent heart disease
Taking a daily multivitamin does not reduce the rate of heart attack, stroke, heart failure, revascularization, or death. It may lower the risk of cancer by 8%.
Heart Beat: ECG? There's an app for that!
People with heart disease will soon be able to transmit information about their heart rhythm to their doctor's office using an iPhone app.
Heart Beat: Diet matters after a heart attack
Eating a heart-healthy diet after a heart attack or stroke can dramatically lower the risk of having a fatal or nonfatal second heart attack or stroke, or developing heart failure.
Heart Beat: Smoking raises the risk of sudden death in women
Light-to-moderate smoking doubles the risk of sudden death in women. In women with heart disease, quitting smoking lowers the risk to that of nonsmokers in 15 to 20 years.
Bypass best for people with diabetes
People with diabetes often need a procedure to improve blood flow and avoid a heart attack. Those who undergo bypass surgery tend to live longer and are less likely to have a heart attack than those who undergo angioplasty.
Ask the doctors: Can a low magnesium level cause an arrhythmia?
Low magnesium levels can trigger the development of abnormal heart rhythms such as atrial fibrillation. Proper magnesium levels can be restored by taking magnesium supplements or eliminating a cause such as excessive alcohol use.
Ask the doctors: What can I do about varicose veins?
Varicose veins are not dangerous, but can cause legs to ache. Treatment options include compression stockings, lifestyle changes, surgical removal of the veins, and sealing them off from the circulation.
Building a better stent
Researchers continue to try new ways to prevent stents from clogging with cells from the artery wall or from attracting blood clots. Their goal is to develop a stent that can be used in any person to prevent a heart attack.
Advice on using painkillers safely
To minimize the risk of heart attack, people with heart disease who need a nonsteroidal anti-inflammatory drug (NSAID) for pain relief should start with the lowest dose of the least risky NSAID (naproxen) for the shortest period of time possible.
Many treatment choices for leg pain
Fatty blockages in the leg arteries can cause pain in the thigh or calf muscle that occurs when walking and disappears with rest.
How thyroid hormone affects the heart
Too little thyroid hormone can interfere with the heart's pumping strength and raise cholesterol and blood pressure. Too much thyroid hormone can cause the heart to race and raises the risk of developing heart failure.
Digoxin useful … with restrictions
Although digoxin is commonly used in atrial fibrillation, it can increase the risk of death and should be used only in very low doses and as a second rate-control drug when a safer drug is not adequate.
Promising news about heart failure
Serelaxin, a new drug derived from hormones that cause muscle relaxation, appears to reduce the symptoms of heart failure, organ damage from poor blood flow, and heart failure deaths.
Tests to evaluate risk of heart attack
Diabetes increases the risk of developing heart disease. Among people with diabetes, a variety of imaging tests can be used to estimate the risk of having a heart attack or stroke.
Heart beat: An egg a day may be A-okay
In people without diabetes, an egg a day does not increase the risk of heart attack or stroke. In all people, eating eggs lowers the risk of hemorrhagic stroke.
Heart beat: Overeating? Blame fructose
Fructose may not signal the brain to stop eating when the stomach is full, which could make it easier to overeat. Since sugar is half fructose, this offers another reason to cut back consumption of sugar-sweetened beverages and other sugary foods.
Heart beat: Aspirin after heart attack or stroke
Aspirin reduces the risk of a second heart attack or stroke by 20%, yet doctors prescribe it for less than half the people who might benefit from it.
Heart beat: It's never too late to quit smoking
Smoking takes about 10 years off a lifespan. Quitting smoking at any age can replace some of those years.
Tests your doctor may order to determine whether you have heart disease
Two types of tests are used to diagnose heart disease and risk of heart attack. The first determines if a blockage in the heart's arteries is affecting blood flow. If the answer is yes, a second type is done to pinpoint the location of the blockage.
Ask the doctors: Do I need valve surgery?
A leaky mitral valve causes the heart to pump twice as much blood, causing it to enlarge and weaken. When the heart's pumping chamber starts to enlarge, it's time to repair or replace the valve.
Ask the doctors: Should I worry about prediabetes?
Type 2 diabetes damages blood vessels. People with "prediabetes" can lower the risk of progressing to diabetes by exercising 30 minutes a day, losing 5% of their weight, and changing the way they eat.
What it means when your doctor says…"You have atrial fibrillation"
Atrial fibrillation is an irregular heartbeat that may also be fast. Medications can be used to control the heart rate, regulate the rhythm, stop uncomfortable symptoms, and prevent a blood clot that may cause a stroke.
Watch your weight and your waist: Extra pounds may mean heart disease
At any age, extra weight, especially in the belly, increases the risk of developing diabetes, heart disease, stroke, and death. Losing weight is difficult, but there are many things you can do to drop pounds and keep them off.
Heart Advances from Harvard: RX for heart failure: coffee
Drinking two cups of coffee a day may protect against heart failure, likely by lowering the risk of high blood pressure and diabetes.
Heart Advances from Harvard: Antidepressants and arrhythmias
Examination of 11 antidepressants found that three (citalopram, amitriptyline, and escitalopram) may increase the risk of a potentially dangerous heart rhythm disturbance. No one with a history of arrhythmias should take these medications.
Heart beat: Free app predicts risk of heart attack
A new app can be customized with personal data to show heart risk and what you can do about it.
Heart beat: How CPR has changed
It isn't necessary to provide mouth-to-mouth breathing when doing CPR for someone who suddenly collapses. Chest compression alone may be better.
10 myths about heart disease
Believing outdated ideas about heart disease and its risk factors can be dangerous. Myth busting can help you plan the best path to a healthy heart.
Ask the doctors: What could a sharp pain in the upper back mean?
A tear in the aorta produces a sudden, sharp, extraordinary pain with a ripping sensation between the shoulder blades and down the back or in the front of the chest.
Ask the doctors: What's the relationship between blood pressure and knee pain?
Blood pressure can rise when activity levels drop. If your blood pressure was controlled until your activity level changed, increasing your activity may prevent the need for additional medication.
Ways to reduce your dependence on blood pressure medications
Many heart medications interact with certain foods, beverages, vitamins, dietary supplements, over-the-counter drugs, and other prescription medications, causing the heart drug to be more powerful or less effective.
Avoid these with heart medications
Chest pain caused by a heart attack is often described as a squeezing type of pressure that emerges slowly, rather than a sharp, quick pain. A heart attack may cause shortness of breath, sweating, nausea, lightheadedness, or loss of consciousness.
When a drug you take comes under fire
When a medication receives negative press, asking whether the drug caused the side effect, how many people it affected, and whether the side effect was worse than the disease the drug treats can help you weigh the drug's benefits against its risks.
How to prepare for a safe vacation
Before going on vacation, people with heart disease should make sure it's safe to fly and pack pertinent medical information and more than enough medications. Buying air ambulance or repatriation insurance before leaving home may also be wise.
Harvard Heart Advances: Weight gain after quitting smoking does not increase heart risk
Gaining weight after quitting smoking does not negate the benefits of quitting. One study showed a 50% drop in risk of fatal or non-fatal heart attack and stroke six years after quitting, regardless of weight gain.
Harvard Heart Advances: For best results, take your medications as prescribed
Faithfully taking blood pressure, cholesterol, and other heart medications as prescribed can reduce the risk of developing heart disease or its consequences. Taking these medications sporadically can increase the risk of heart attack or death.
Heart beat: Mental decline from arrhythmia
Atrial fibrillation, a common heart-rhythm disorder, may increase the risk of memory loss, a decline in thinking skills, or dementia.
Balancing bleeding vs. stroke risk when you have atrial fibrillation
Taking two kinds of blood thinners-anticoagulant and antiplatelet drugs-increases your risk of dangerous bleeding. But for some people with atrial fibrillation, the stroke-prevention benefit of this double therapy far outweighs the risk.
Ask the doctors: Is one heart test enough?
More testing doesn't always mean better care. Special tests such as nuclear imaging or cardiac ultrasound may be a good idea for people with certain worrisome symptoms or conditions.
Ask the doctors: Can surgery cause a heart attack?
Operations, even those that don't directly concern the heart, put people with heart disease at risk of heart-related complications.
Eat blueberries and strawberries three times per week
Eat a half cup of blueberries or strawberries three times each week. It's nutritious-and it may very well lower your risk of heart attack, a study of young and middle-aged women suggests.
Bypass or angioplasty with stenting: How do you choose?
Bypass surgery is considered the best treatment when all three coronary arteries are blocked. It's usually the best choice when the most important of the three arteries is blocked.
Sleep problems may increase the risk of heart attack and stroke
Poor sleep is linked to poor heart health. That goes double for people with sleep apnea, who stop breathing many times during the night.
Best medicine: The science of exercise shows benefits beyond weight loss
Exercise isn't all about weight loss. Researchers studying the effects of exercise find that it affects the body down to the subcellular level.
More evidence red meat may be bad for your heart
Meat eaters should be careful about taking L-carnitine supplements, new research suggests.
Harvard Heart Advances: Old hearts made young by natural substance
Harvard researchers led by Harvard Heart Letter Co-editor in Chief Dr. Richard Lee have found a naturally occurring substance in the blood of young mice that rejuvenates the hearts of old mice.
Harvard Heart Advances: Had problems with statins? Try them again.
Statin drugs are very good at lowering cholesterol, but side effects such as muscle weakness or muscle pain can make the drugs hard to tolerate. Fortunately, there are several statin drugs on the market.
Dr. Thomas Lee, the editor in chief of the Harvard Heart Letter, introduces an issue focused on the Million Hearts initiative, which aims to reduce heart disease.
The director of the Centers for Disease Control and Prevention discusses the goals of the Million Hearts initiative.
For people who have had a heart attack or are at risk of having one, a daily aspirin can be an effective prevention measure.
Blood pressure gets so much attention because uncontrolled hypertension is a significant risk factor for heart attacks and strokes.
Managing cholesterol involves more than just changing eating habits.
Smoking just a few cigarettes a day carries as much heart disease risk as smoking a pack a day, and secondhand smoke exposes nonsmokers to risk as well.
Fruits and vegetables provide a foundation for healthy eating.
Adding regular physical activity to your daily routine is easier than you might think.
Changes in health care laws and policies are aimed at helping people make choices that can improve their health.
The key components of weight loss are taking in fewer calories and, through physical activity, burning more calories than consumed.
A reduction in body mass index could offset age-related increase in heart disease risk.
A pocket-sized ultrasound device could give doctors more flexibility in monitoring patient progress.
A pacemaker-like device may provide help to people who are unable to control their blood pressure through standard treatments.
I am 77 years old, and my doctor recommended surgery to replace my aortic valve. He said my choice is between a mechanical valve and a pig valve. Which is the preferred option?
After a heart attack six years ago, I went on Lipitor because my doctor said it was proven to reduce the risk of a second heart attack. Three years ago, I switched to a generic to save money. Now that Lipitor is going generic, should I switch back?
Medications help the heart - if you take them
Strategies for getting the most from medications include ensuring they are obtained at the lowest possible cost, and working with a doctor or pharmacist to minimize side effects.
Another warfarin alternative for stroke prevention in people with a-fib
Newly available medications offer alternatives to warfarin that are easier for some people to take.
No-surgery aortic valve replacement okay for some, not all
A less invasive procedure for replacing the aortic valve means a shorter hospital stay and recovery, but for now its availability is limited to those who are unable to undergo open-heart surgery.
Living with an implantable cardioverter-defibrillator
Understanding the function of an implanted cardioverter-defibrillator device, and its impact on the recipient's life, will better prepare potential recipients to live with one.
Heart Beat: Smart at Heart bridges the emotional and physical shores of heart health
A book by a Massachusetts General Hospital cardiologist presents a view of heart health that aims to merge its physical and emotional components.
Heart Beat: Fruit and veggie diet may offset genetic risk for heart disease
Certain genetic variations increase a person's risk of heart disease, but eating a diet rich in vegetables and fruits can counter this risk.
Heart Beat: Clots can form in stents years after placement
Clots can form in stents years after placement.
Heart Beat: Psoriasis again linked to heart disease
Research suggests that people with psoriasis are more likely to experience some form of cardiovascular disease.
Follow-up: Treating cardiovascular risk factors also aids ED
Analysis of clinical trials supports the belief that men with ED who treat their cardiovascular risk factors will also experience improved erectile function.
Ask the doctor: Should I worry about low nighttime blood pressure?
My systolic blood pressure is high in the morning (about 165), but in the evening it drops to below 100. I am taking two blood pressure medications daily and still experiencing seriously low blood pressure at night. What would you suggest?
Ask the doctor: Are hot flashes linked to heart disease?
I am 76 years old and still get hot flashes. Is it true that women who have hot flashes many years after menopause are more likely to experience heart problems than those whose symptoms end early in menopause?
Three (more) cheers for statins
A study boosts support for the ability of statins to help clear arteries of plaque, while two others reaffirm the drugs' safety.
Teamwork in angioplasty-bypass decisions
Updated guidelines for using bypass surgery or angioplasty to treat blocked cardiac arteries emphasize collaboration between these treatment groups.
Fun and exergames: Not just for kids anymore
Video games that involve physical activity can be beneficial for people of all age groups and abilities.
HDL cholesterol: Is higher really better?
A cholesterol study found that lowering LDL with a statin was more beneficial than attempting to boost HDL with niacin.
Small step forward for stem cells, giant leaps remain
In a very small study, stem cells from heart tissue helped boost pumping power in the hearts of heart attack survivors.
Heart Beat: Fixing faulty heart rhythms may help kidneys filter better
Treating atrial fibrillation with ablation may improve kidney function as well.
Heart Beat: A direct drug hit with alteplase busts up leg clots
Blood thinners keep clots at bay, but a different medication delivered by catheter directly to the site of a leg vein clot could eradicate it altogether.
Heart Beat: New from Medicare: Free counseling for heart disease and obesity
Medicare recipients are eligible for counseling programs designed to help people prevent heart disease or deal with obesity.
Heart Beat: Which drugs work best for resistant high blood pressure?
Many people with resistant hypertension are not receiving the most effective combination of medications for the condition.
If heart attack victims have to wait to be transferred to another hospital for emergency angioplasty, the delay is life threatening.
Ask the doctor: Is an egg a day okay?
Can eggs be part of a balanced, heart-healthy diet?
Ask the doctor: Will exercising less vigorously fix my heart rhythm problem?
I'm a fit 61-year-old who had bypass surgery 15 years ago. Recently, I've been having rapid heartbeats (what my doctor calls supraventricular tachycardia) during or just after vigorous exercise. Should I tone down my exercise?
Overuse, underuse, and valuable use
Most doctors do not misuse the resources available to them, but the potential for overuse exists. Informed patients should be aware of this and be prepared to question the necessity of tests or treatments.
What's at the heart of fainting?
Most fainting is not connected to cardiovascular issues, but anyone who experiences a fainting episode should be examined by a doctor.
Tales of two heart failures
Incidence of heart failure is split about evenly between two types, in which the heart's muscle has either weakened or stiffened.
Blood clots: The good, the bad, and the deadly
Blood clots inside the body can be dangerous, especially if a clot blocks an artery, or forms in one location and then is carried through the bloodstream to a lung or the brain.
Gut microbes may affect heart disease risk
Researchers are exploring a possible link between microbes that live in the digestive system and the development of atherosclerotic plaque.
Heart Beat: No beef with beef if it's lean
Red meat can be an acceptable part of a healthy diet, as long as it is very lean and is eaten in small amounts along with fruits, vegetables, and whole grains.
Heart Beat: Stroke risk rises in people who are depressed
Researchers found that people with a history of depression were more likely to suffer a stroke compared to people who were not depressed.
Heart Beat: Radiation for breast cancer linked to narrowed heart arteries
Radiation therapy for breast cancer can lead to narrowed coronary arteries, particularly for women with left-sided breast cancer.
Ask the doctor: In search of the wholly healthy muffin
I see in-store promotions for low-fat muffins almost everywhere. Is there such a thing as a heart-healthy muffin?
Ask the doctor: Are testosterone and cholesterol levels related?
I am 70 years old, and since I started taking testosterone to boost below-normal levels of that hormone, my LDL and HDL levels have dropped. What's the cholesterol-testosterone connection?
Take the hassle out of taking warfarin
Frequent testing or home monitoring may be options to take the hassle out of taking warfarin.
Arm-to-arm variations in blood pressure may warrant attention
Arm-to-arm blood pressure variations may warrant attention.
Exercise protects the heart when diabetes threatens
Exercise protects the heart when diabetes threatens.
Angina in the intestines mirrors what happens in the heart
Angina in the intestines mirrors what happens in the heart.
The wake-up-call heart attack
The wake-up-call heart attack
Heart Beat: Heart attack risk soars soon after losing a loved one
Grief raises heart attack risk.
Heart Beat: Satisfaction with job, family, sex life, and self may help the heart
Satisfaction with life keeps heart healthy.
Heart Beat: No more routine liver tests for statin users
Routine liver tests for statin users nixed.
Heart Beat: Everyday foods are top sources of sodium
Everyday foods are top sodium sources.
Heart Beat: Hidden atrial fibrillation is a possible culprit in mystery strokes
Hidden atrial fibrillation is a possible culprit in mystery strokes.
Ask the doctor: Should I worry about my homocysteine level?
Should I worry about my homocysteine level?
Ask the doctor: Is diet soda good or bad?
Is diet soda good or bad?
MRI and pacemakers: A risky mix
Unless you have an MRI-friendly pacemaker, a CT scan may be safer.
Ask the doctors: Can radiation damage the lungs?
Can radiation damage the lungs?
Ask the doctors: Fainting while doing chin-ups?
Fainting while doing chin-ups?
Ask the doctors: High BP and diabetes?
High BP and diabetes?
Warfarin users, beware of antibiotics
Warfarin may interact to increase your risk of internal bleeding.
Losing weight may require trial and error
No weight-loss method works for every heart patient.
Omega-3 may not protect the heart
Expert advice in favor of omega-3 supplementation is mixed.
Blood pressure drugs compared
ACE inhibitors beat ARBs hands-down for survival benefit.
Robotics help stroke survivors walk again
Sophisticated devices enhance traditional rehabilitation techniques.
Soft drinks found to increase stroke risk
Study implicates both diet and sugar-sweetened sodas.
Heart beat: Harvard researchers identify genetic cause for a form of cardiomyopathy
Researchers identify genetic cause for one form of cardiomyopathy.
Heart beat: Atherosclerosis growth process explained
Atherosclerosis growth process explained.
Heart treatment designed just for you
Biomarkers help individualize care for heart attack and heart failure.
Ask the doctors: How can I prevent another clot?
How can I prevent a blood clot?
Ask the doctors: Is it okay to discontinue warfarin?
Is it safe to stop warfarin?
Caution: Watch your radiation exposure
Radiation from heart tests can add up over time to an unhealthy amount.
Breakthrough in aortic valve treatment
A nonsurgical aortic valve treatment gains favor.
Avoid another hospital stay
Two simple factors can keep you out of the hospital or put you back in.
Yes to heartburn meds plus clopidogrel
Evidence that PPIs interfere with blood thinning is weak.
The art of refining heart risk prediction
When it comes to refining heart risk prediction, one model comes out the winner.
Pollution may shorten lifespan
Pollution may shorten your life after a heart attack.
Heart beat: Stop-smoking drug may be safe
Stop-smoking drug may be safe after all.
Heart beat: FDA approves third weight-loss drug
FDA approves another weight loss drug.
Heart beat: One reason for diabetes identified
One cause for diabetes found.
Cancer treatments may harm the heart
Radiation therapy and chemotherapy are increasing the number of people who survive cancer. But they also cause cardiovascular disease in some of the people who get these therapies.
Ask the doctors: Should I replace my ICD?
Elderly people with an implantable cardioverter-defibrillator that has never "fired" can consider having the device turned off.
Ask the doctors: Do I really need a statin?
I'm a little overweight, but my cholesterol numbers aren't bad. Do I really need the statin my doctor wants me to take?
When an implantable defibrillator fails
Implantable cardioverter-defibrillators can stop a potentially deadly heart rhythm and restore a healthy one. Repeated bending and flexing can cause their leads to fail. Replacement or removal is an option.
Resuming sex after a heart attack
New evidence-based recommendations from the American Heart Association answers questions about resuming sexual activity after a heart attack that many people (and their doctors) are too embarrassed to bring up.
Dual antiplatelet therapy after stenting
After angioplasty and stent placement, it may not be necessary to take aspirin plus Plavix-what's called dual antiplatelet therapy-for more than a year.
Women are at higher risk for stroke than men.
Women with heart disease or atrial fibrillation are more likely than men to have a stroke.
Measure blood pressure in both arms
It's a good idea to have your blood pressure measured in both arms every so often. A difference between the two readings of more than 10 points may indicate increased cardiovascular risk.
Heart beat: Bleeding risk with aspirin must be balanced against benefit
An aspirin a day has been shown to lower the risk of a first heart attack in men and a first stroke in women, but it also increases the risk of major bleeding in the digestive tract or brain.
Heart beat: Heart problems from Z-Pak
The antibiotic azithromycin sometimes can trigger abnormal heart rhythms. Though uncommon, it is more likely to happen to people with heart failure, diabetes, or a previous heart attack.
Heart beat: An Aingeal to watch over you
For people in the hospital, a miniature sensor attached to the torso can transmit vital information about the heart and breathing to doctors and nurses in the hospital.
Update on genetic testing for heart disease
Genetic testing is useful for determining if someone has inherited a condition caused by a problem with a single gene, like hypertrophic cardiomyopathy. But it can't yet add much to predicting who will have a heart attack.
Ask the doctors: Is a high potassium level bad?
Kidney disease and some medications, like ACE inhibitors and NSAIDs such as ibuprofen, can cause potassium levels to be high. It is almost impossible to achieve high potassium levels simply by eating foods rich in potassium.
Ask the doctors: Why did my heart rate slow down?
The combination of a beta blocker and digoxin to treat atrial fibrillation can cause the heart rate to slow too much. Most people need a resting heart rate in the 60s to 80s to feel well.
The dangers of pulmonary hypertension
Pulmonary arterial hypertension occurs when arteries that supply the lungs become stiff and thick. New treatments are extending life for people with this chronic condition.
When a clot interferes with blood flow
Blood clots that form in the legs (deep-vein thrombosis) or lungs (pulmonary embolism) can be painful, and even deadly. Prompt treatment and good follow-up can minimize the danger.
Viagra and Cialis for heart failure?
Drugs used to treat erectile dysfunction (Cialis, Levitra, and Viagra) may also help ease heart failure. These drugs cause arteries to relax, which could help a failing heart pump more effectively.
A pacemaker to prevent fainting
For people who faint because their heart rates suddenly plummet (a condition called cardioinhibitory syncope), a dual-chamber pacemaker has been shown to reduce fainting episodes by 57%.
Use food to hold off vascular damage
Antioxidants from food-not from pills-can protect arteries and other tissues from damage caused by highly reactive compounds created when oxygen combines with other molecules. Colorful fruits and vegetables are great sources of antioxidants.
High HDL may not protect the heart
People with naturally high levels of protective HDL cholesterol have lower rates of cardiovascular disease. New studies suggest that boosting low HDL with medication may not pay off as much as lowering harmful LDL cholesterol.
The promise of a total artificial heart
A growing number of people with failing hearts are being given total artificial hearts as they wait for donor hearts to become available.
Heartbeat: New cholesterol drug is promising
Adding an investigational new drug called AMG 145 to a statin dramatically lowers levels of harmful LDL cholesterol.
Heartbeat: Heart attack accelerates plaque
A heart attack or stroke triggers an immune response that boosts inflammation and speeds the development of atherosclerosis in artery walls. This may explain why heart attack or stroke victims are at risk for repeat events.
Heartbeat: Drug-eluting stents being misused
Many people who don't need a drug-eluting stent during angioplasty get one anyway. More appropriate use would save $200 million a year in the cost of the stents plus the medications that must be taken afterwards.
Treatments for heart failure
Medications and devices can help many people with heart failure with low ventricular ejection fraction live longer with a better quality of life. But not all therapies are right for everyone, and treatment must be individualized.
Ask the doctors: Do I have heart disease?
Developing chest pain while taking an exercise stress test is worrisome.
Ask the doctors: Could I have serious kidney damage?
An increasing creatinine level could indicate problems controlling diabetes and blood pressure. Measuring the kidneys' glomerular filtration rate offers helpful information.
When should we treat blood pressure?
In addition to blood pressure, doctors are now considering all factors that increase an individual's cardiovascular risk as a guide to whether to begin antihypertension medications.
Carotid stenosis treatments compared
The surgical procedure known as endarterectomy and the less invasive alternative, stenting, are equally safe and effective treatments for preventing stroke in people with blocked or narrowed carotid arteries.
Women: Cardiac rehab key to recovery
Women are less likely than men to take advantage of cardiac rehabilitation after a heart attack, bypass surgery, or angioplasty even though women benefit more from it.
Muscle aches and pains from statin use
People who take statins without experiencing muscle pain or discomfort do not need regular blood tests to check for muscle damage. Anyone who takes a statin and experiences severe muscle pain and weakness, however, should seek medical help immediately.
New devices compensate for foot drop
When stroke causes a person to have trouble lifting or moving a foot (foot drop), two new devices can help. Both stimulate the peroneal nerve so the weak foot lifts, rather than drags.
Aspirin not effective in some people
When people who take aspirin suffer a heart attack or stroke, they are said to be aspirin-resistant. But this condition is rare, and most cases can be attributed to failure to take aspirin as prescribed.
Heart-healthy menu choices now clear
The American Heart Association's "Heart-Check" menu symbol ensures that an entree or meal meets specific requirements for calories, cholesterol, saturated fat, trans fats, and sodium.
Heart advances from Harvard: Fat that's bad for the heart, brain
Women who eat a diet high in saturated fat are more likely to develop memory loss and thinking problems than those who eat more monounsaturated and polyunsaturated fats.
Heart advances from Harvard: ER evaluation methods compared
Contrast-enhanced computed coronary tomographic angiography (CCTA), a noninvasive technology, accurately diagnosed or ruled out heart attack much faster than standard evaluation methods.
Heart advances from Harvard: Impact of inactivity assessed
Physical inactivity is responsible for 6% of coronary artery disease, 7% of diabetes, 10% of breast and colon cancers, and 9% of premature deaths worldwide. Increasing activity by 10% to 25% could prevent up to 1.3 million deaths per year.
Heart Beat: "Smart pill" won't let you forget to take your medications
About 50% of Americans don't take their medications properly. A new "smart" pill that reveals whether a medication has been taken as prescribed could improve medication taking.
Should you have stenting or bypass surgery?
Angioplasty and bypass surgery can both restore blood flow to the heart. Which one is better depends on factors like the location and severity of the blockages, symptoms, and heart function.
Ask the doctors: Should I get an LVAD?
A left ventricular assist device (LVAD) can greatly improve quality of life for people with heart failure who are too old for a heart transplant.
Ask the doctors: Do I have diabetes?
A high blood glucose level may signal increased risk of diabetes, but in the absence of common symptoms of diabetes, a hemoglobin A1c test may provide a more accurate diagnosis.
Vitamin D: Cardiac benefits uncertain
Low vitamin D levels have been linked to increased risk of cardiovascular events or death, but there's no evidence that taking vitamin D supplements offers protection. The recommended daily intake of vitamin D is 600 to 800 international units.
Choosing a heart surgeon
Information on heart surgeons is widely available on the Internet. But a Harvard study shows that most consumers often do not correctly interpret the data the way they are presented.
Protect your heart with a flu shot
The influenza vaccine dramatically reduces the number of heart attacks and cardiovascular deaths. Anyone with heart disease should get a flu shot (not the nasal drops) every year.
Beware of "holiday heart syndrome"
Overdrinking, particularly binge drinking, can trigger the fast, erratic heart rhythm known as atrial fibrillation. Because this tends to occur during holiday celebrations, the condition is known as "holiday heart syndrome."
Unexplained shortness of breath
For unexplained shortness of breath, cardiopulmonary exercise testing may solve the mystery. Shortness of breath can often be eliminated or reduced with medical or surgical treatment, or cardiac or pulmonary rehabilitation.
Medications Management: Generic heart medications
Generic heart medications are equivalent to their brand-name versions and are safe for people with heart disease to use.
Heart Advances from Harvard: HDL and heart attack
High LDL cholesterol levels are known to increase the risk of heart attack, and lowering LDL levels has been proven to help protect against heart attack.
Heart Beat: Childhood abuse raises heart risk
Childhood abuse is bad enough on its own, and now it appears it may also increase the risk of developing cardiovascular disease earlier than usual.
Breakthrough in mitral valve treatment
New devices are enabling doctors to repair loose mitral valves without the need for open-heart surgery. Several devices are in development, and one of them, MitraClip, is now being tested in clinical trials in the United States.
Ask the doctors: Do I really need a statin?
The risk of heart complications in people with kidney disease may be reduced as much as 20% by lowering cholesterol with medications.
Ask the doctors: Can I have heart surgery while taking pain medication?
Addiction to pain medication makes it hard to monitor and manage pain after heart surgery.
Ask the doctors: Is it safe to stop taking my antiplatelet therapy?
In people who received a cardiac stent more than a year earlier, it is safe to stop clopidogrel use before elective surgery, and possibly permanently.
Stem cell therapy for heart disease
Researchers are aiming to find a way to repair damaged hearts with stem cells. Many uses for stem cell therapy are being pursued, but its future likely lies in the prevention and treatment of heart failure.
Treating resistant hypertension
When blood pressure remains high despite the use of three antihypertensive medications, additional medications need to be added until blood pressure responds. Restricting salt and increasing exercise may help.
Recovering from coronary bypass surgery
Five strategies can pave the way to a smooth recovery after bypass surgery: staying as active as possible before surgery, quitting smoking, eating a healthy diet full of protein, staying positive, and taking your heart medicines as prescribed.
Choosing options for life-sustaining care
For individuals with a serious disease like heart failure, making decisions in advance about life-sustaining measures and medical decision-making can help ensure their wishes are followed.
Green tea may lower heart disease risk
Green tea can significantly lower LDL cholesterol and triglycerides, and this may explain why green tea drinkers have a lower risk of coronary artery disease and death from heart disease and stroke.
Heart Advances from Harvard: Radial artery grafts prove durable
In coronary artery bypass grafting (CABG), the internal mammary (or thoracic) artery is the graft of choice for bypassing blockages in the main coronary artery, because it tends to remain open and functioning well for many years.
Heart Advances from Harvard: Potential cure for type 1 diabetes
A study conducted at Massachusetts General Hospital has confirmed that a vaccine designed to raise levels of tumor necrosis factor (TNF) temporarily restores insulin secretion in people with type 1 diabetes.
Heart Advances from Harvard: CABG vs angioplasty in kidney disease
Older people with chronic kidney disease often develop heart disease, since atherosclerosis can affect the arteries of both organs.
Top five habits that harm the heart
Five poor heart habits are responsible for the majority of heart disease, but their opposite, healthy behaviors can help protect the heart and improve overall health.
Conversation with an expert: Plavix: What you need to know
Dr. Patrick O'Gara, a member of the Health Letter's editorial board, talks about safety issues regarding the use of Plavix after angioplasty.
Tiny pumps can help when heart failure advances
A ventricular assist device helps boost the heart's pumping capacity in people with advanced heart failure, allowing them to resume some normal mobility and activities.
On the horizon: Squeezing the arm to protect the heart
Preconditioning the heart by using a blood pressure cuff to halt and release blood flow could protect heart muscle during surgery or a heart attack.
On the horizon: Exercise at rest - no longer an oxymoron?
A bedlike device that shakes the body head-to-toe stimulates blood vessels and improves blood flow, which may benefit people with heart failure who have difficulty exercising.
On the horizon: Nanoburrs seek, heal injury in artery
Microscopic particles that contain medication could one day be used to repair damaged arteries.
On the horizon: A pacemaker to lower blood pressure
People who are unable to control their high blood pressure through diet, exercise, and medication may benefit from a pacemaker-like device that stimulates the body's sensors for regulating blood pressure.
On the horizon: An ICD that works without wires
A new type of implantable cardioverter-defibrillator that works without wires may be an option for younger people living with heart rhythm problems.
On the horizon: Removing fat makes HDL ("good cholesterol") even better
A process called delipidation, in which cholesterol and fats are removed from HDL particles that are then returned to a person's bloodstream, stimulates the HDL to attack cholesterol in blood vessels more effectively.
Ask the doctor: Do I really need surgery to fix my aortic valve?
I have had a leaking aortic valve for many years. I get an echocardiogram every six months. After the latest one, my doctor told me that my heart was enlarging. He wants me to have surgery to replace the valve. Should I do this at age 68?
Ask the doctor: Racing heart and pneumonia
When someone has pneumonia, is it common for the heart rate to fluctuate wildly?
Ask the doctor: Is 10,000 steps a day a good target for an older person?
My daughter gave me a pedometer and told me to walk 10,000 steps a day. When I wore it for a while, I realized I was taking only about 3,000 steps a day. Is 10,000 a realistic number for someone my age (70 years)?
Acetaminophen may boost blood pressure
A small Swiss study found that daily use of acetaminophen can cause an increase in blood pressure, which is of concern to people with cardiovascular disease.
Magnesium helps the heart keep its mettle
Magnesium is essential to the body's proper functioning, but most people don't get enough of it. A healthy diet and a vitamin-mineral supplement should provide the necessary amount.
Protect your heart during dental work
In the past, people taking an antiplatelet medication were usually told to stop taking it temporarily before dental surgery, but doing so may increase the risk of a heart attack or stroke in the weeks following the procedure.
Coping with shortness of breath
Chronic shortness of breath is a common adjunct to heart disease. Researchers have formulated new guidelines to identify and treat this condition in those who suffer from it.
New drug offers warfarin alternative for atrial fibrillation
People who take the blood thinner warfarin have a new option, Pradaxa, which is not affected by diet and does not require its dosage to be fine-tuned.
Heart Beat: Taking the myth (and, alas, some of the romance) out of chocolate and the heart
Everyone wants to believe that eating chocolate will offer some protection to the heart and arteries, but so far the medical evidence to support this idea isn't there.
Brief updates on coughing as a side effect of a type of blood pressure medication, waist circumference as an indicator of longevity, and a possible correlation between multiple miscarriages and increased risk of heart attack.
Ask the doctor: Could a sudden gain in weight be caused by hot weather?
At 80 I am in relatively good health. During a period of extreme heat this summer, my ankles were more swollen than usual, and my weight jumped three pounds in just two days. Was that because of the heat, or did salt have something to do with it?
Ask the doctor: How much psyllium is needed to lower cholesterol?
What amount of psyllium should I take each day to lower cholesterol?
Ask the doctor: Could getting a pacemaker have damaged my vagus nerve?
I recently had a pacemaker implanted. While the process was going on, I felt a pulsation. I still feel it months later. My primary care doctor thinks that my vagus nerve could have been damaged when the pacemaker was implanted. Is that possible?
Ask the doctor: Is it okay to have an MRI after getting a stent?
I needed angioplasty in 2007 and had a stent implanted during the procedure. Due to another health problem, my doctor now wants me to have an MRI. Could this cause any problem with the stent?
11 ways to prevent stroke
Some risk factors for stroke, such as family history and ethnicity, cannot be changed, but attention to factors like weight, blood pressure, cholesterol, and physical activity can significantly reduce stroke risk.
Fish oil questioned as treatment for heart disease
Results of several studies suggest that taking fish oil does not benefit people who already have some form of heart disease, but eating fish is still likely to offer health benefits to most people.
Hybrid heart surgery expands options
People who need more than one type of heart procedure may be able to have them done in a hybrid operating suite, reducing risk and some recovery times.
Transfusion and heart surgery: Only when needed
The practice of routinely giving blood transfusions to patients during and after heart surgery is being challenged by research findings.
Heart Beat: When stocks crash, heart attacks go up
Researchers correlated the stock market's woes in 2008 and 2009 with an increase in heart attacks.
Heart Beat: Eyelids as windows into the heart
Small, yellow skin lesions that develop on the eyelids may be an indicator of heart disease.
Heart Beat: Dual-chamber pacemaker helps heart failure
Combining a biventricular pacemaker and an implantable cardioverter-defibrillator may help prevent death from cardiac arrest better than the ICD alone.
Heart Beat: Stay lean, live longer
Despite studies that suggested those who gain weight with age might live longer, having a body mass index in the normal range still correlates with a lower death rate.
Heart Beat: Rheumatoid arthritis should heighten heart awareness
People with rheumatoid arthritis may be more likely to develop heart problems.
Ask the doctor: Why does my blood pressure rise in the afternoon?
I am a 50-year-old woman with newly diagnosed high blood pressure. My pressure seems to be normal in the morning, averaging 121/74, but in the afternoon the upper number is often in the 140s to 150s. Is this normal, especially while on a medication?
Ask the doctor: Is high potassium a problem?
You have written about low potassium in the blood and ways to improve it, but I never read about too much potassium in the blood. Can you tell me why it happens and what is done about it?
Same-day angioplasty feasible, safe
People who undergo an angioplasty typically stay in the hospital overnight, but at some hospitals patients who meet strict criteria are now being allowed to go home the same day.
Long-term look at aneurysm repair
A study comparing the two methods of repairing an abdominal aortic aneurysm found differences in survival rates after the first month, but after several years survival rates for both groups were approximately the same.
Study suggests caution on statins after a bleeding stroke
People who take a statin after a hemorrhagic stroke may be at a slightly higher risk of having another stroke, but this potential risk may be outweighed by the protection against heart attack provided by a statin.
Hypertrophic cardiomyopathy: Optimism tinged with caution
Hypertrophic cardiomyopathy is a thickening of the heart's inner dividing wall that can weaken the heart's ability to pump blood effectively. Though its effects vary considerably, many people are able to live normally with the condition.
Heart Beat: Family matters: Your parents' heart health affects yours
Research suggests that a family history of heart attack is another factor that should be considered in estimating a person's own heart attack risk.
Brief reports on temporary heart damage caused by running marathons, the effect of kidney disease on the necessary dose of warfarin, and a possible increased risk of heart trouble for women taking a breast cancer drug.
Ask the doctor: Are my blood pressure and heart rate changing normally during exercise?
Sometimes I walk while wearing my blood pressure cuff. At first my systolic blood pressure rises while my heart rate hardly changes. But when I start walking faster, my pressure stays steady while my heart rate increases to 110. Is this a normal pattern?
Ask the doctor: Does smoked fish contain omega-3 fats?
I like smoked salmon and kippered herring, and thought that eating them was good for me. But I read in another health newsletter that the smoking process destroys all the heart-healthy omega-3 fats. Is that true?
Ask the doctor: What is venous insufficiency?
I have been diagnosed with venous insufficiency. What does that mean?
Gloomy forecast on heart disease
The American Heart Association is predicting significant increases in heart disease among baby boomers, along with associated health care costs. Following better health habits can help prevent heart disease.
Let's put the "public" in public defibrillation
Many people are reluctant to use an emergency defibrillator to attempt to revive a person in cardiac arrest, but the instructions are clear and simple, and taking action could save a person's life.
Two-way street between erection problems and heart disease
Heart disease and erectile dysfunction are often related conditions. The presence of either should prompt a conversation with your doctor about the other, as well as lifestyle choices that can improve sexual function and cardiovascular health.
Hysterectomy linked to increase in heart disease
Women who have a hysterectomy, especially those under 50 who also have their ovaries removed, seem to be at increased risk of heart disease.
Pre-sports check-up can prevent sudden death among athletes
Young people who want to take part in athletic activities should have a pre-sports checkup to identify any potential conditions or irregularities that could lead to a sudden cardiac event.
Heart Beat: HDL function, not just amount, could affect artery health
Research suggests that some HDL cholesterol is stronger, enabling it to pull more cholesterol out of white blood cells.
Heart Beat: Recycling effort keeps hearts ticking
A program is collecting and donating medical goods, including pacemakers and other implanted devices, to people in less-developed countries who would not be able to afford them.
Heart Beat: Exercise to strengthen heart and muscles best for diabetes
Combining aerobic exercise and strength training is better for people with diabetes than either form of exercise alone.
Ask the doctor: Is it okay to drink alcohol if I have an implanted defibrillator?
You have said that alcohol can cause heart rhythm problems. I have an implanted defibrillator. Is it okay for me to drink alcohol?
Ask the doctor: Headache and stroke
I have heard that one symptom of a stroke is "the worst headache you can imagine." I recently had a migraine that was so much more painful than previous ones that I worried it was a stroke. Is there any way to tell a migraine from a "stroke headache"?
Cut salt - it won't affect your iodine intake
Concern about sodium intake has raised the question of whether cutting back on salt could put people in danger of not getting enough iodine, but this should not be a cause for concern.
Specialized care improves stroke survival
Care at a specialized center may provide a better chance of surviving a stroke, even if it requires extra travel time to reach.
Weight-loss surgery can help - and harm - the heart
Although weight-loss surgery benefits the body with improvements in blood pressure, blood sugar, and cholesterol levels, the procedure stresses the heart significantly, so this risk must be weighed if considering the surgery.
Who needs an implantable cardioverter-defibrillator?
Thousands of people receive implantable cardioverter-defibrillators each year, but not everyone who receives the device really needs it, and some people would be better off pursuing other treatment avenues.
Heart Beat: The shape of cardiovascular risk
Excess body fat, regardless of whether it is carried on the midsection or thighs, is bad for the heart and for overall health.
Heart Beat: Mediterranean-type diet can fix multiple problems
Eating a Mediterranean-style diet can help with a number of health issues.
Brief reports on hypertension statistics, a theory about why some people show more of an HDL cholesterol benefit from exercise than others, and more about the connection between depression and heart disease.
Ask the doctor: My defibrillator has never "fired." Should I keep it or have it taken out?
My doctors recommended I get a defibrillator as "insurance," but I have had it for eight years and it has never gone off. My doctor wants to put in a new battery. At age 86 I'd rather not. Could I just leave the device in place or have it taken out?
Ask the doctor: Is hip replacement surgery dangerous for my heart?
I am a 72-year-old with diabetes, and I need to have a hip replaced. Does my diabetes make this surgery too dangerous for my heart?
Ask the doctor: What is pericardial effusion?
My doctor told me I have pericardial effusion. I know it has something to do with fluid in the heart. Can you tell me more?
Ask the doctor: Why does my heart sometimes feel like it stops, then starts up again with a jerk?
I am 92 and have atrial fibrillation and high blood pressure, both controlled by medication. Every so often when I am relaxing after dinner, my heart feels like it stops and then starts up again with a jerk. Is this something I should worry about?
Surviving a heart attack: A success story
Heart attack survival rates are much higher than they were a few decades ago, thanks to greater awareness, new clot-busting drugs, and expanded access to specialized cardiac treatment centers.
Measuring blood pressure: Let a machine do it
Participants in a research trial who had their blood pressure taken by a machine had lower readings than those who had their pressure taken by a doctor.
New dietary guidelines offer sketch for healthy eating
The latest edition of the government's Dietary Guidelines for Americans tries to nudge people toward healthier eating habits and patterns.
Heart Beat: Another yellow light for calcium supplements
The debate over calcium supplements continues, with a new analysis suggesting that people who take them may have an increased risk of heart attack or stroke.
Heart Beat: Unexpected benefit for digoxin?
Researchers found that digoxin, a drug used to treat heart problems, may also be effective at preventing the growth of prostate cancer cells.
Heart Beat: Emotional control and the heart
A positive emotional outlook may lead to a lower risk of heart disease.
Heart Beat: Trends in high cholesterol and statin use
The effectiveness of statin drugs is contributing to a reduction in the number of Americans with high cholesterol.
Heart Beat: Heart-health questions stump many
A poll by the American Heart Association found that many people do not know some basic facts about heart health.
Further information about cardiac rehabilitation programs for people with heart disease and yoga as a way to reduce episodes of atrial fibrillation.
Ask the doctor: Would moving to a lower altitude help my heart rate?
I have bradycardia. I live at 5,765 feet - would moving to a lower altitude help my heart rate? Recent cardiac tests were normal. My cardiologist said I don't need a pacemaker, and to keep on doing what I've been doing. At age 85 I walk three miles a day.
Ask the doctor: Are advanced blood tests needed for coronary artery narrowing?
I had a stent put in at age 59. Thanks to diet, exercise, and medications, my cholesterol numbers are excellent. Recent tests showed ischemia and new blockages requiring two additional stents. Why do my arteries keep getting clogged despite my efforts?
Aiming for ideal improves heart health
The American Heart Association hopes that its definition of ideal cardiovascular health will encourage people to strive to be healthier.
Trial clouds use of niacin with a statin
A clinical trial of niacin in combination with a statin to lower cholesterol was stopped early because of safety concerns.
Update on aspirin
For people who have not had a heart attack, the question of whether or not to take a daily aspirin is a matter of weighing potential benefits against potential harm.
What's the best target for blood pressure when it is high?
Lowering blood pressure is a primary goal for those with hypertension, but if blood pressure goes too low in someone with high blood pressure, it can cause the heart to get overworked.
Sliding scale for LDL: How low should you go?
Research has lowered the target for the level of "bad" LDL cholesterol, but an individual's cardiovascular risk should factor into determining the appropriate target.
Heart Beat: Research continues to serve up heart perks for coffee drinkers
Evidence of coffee's cardiovascular benefits continues to accrue.
Heart Beat: "Just in case" artery scans offer little or no payoff, possible harm
Carotid ultrasound tests are not necessary or helpful for people who are in good health and not experiencing any warning signs of stroke risk.
Heart Beat: No connection between ARBs and cancer
The Food and Drug Administration has concluded that angiotensin-receptor blocker medications used to treat high blood pressure do not increase the risk of developing lung cancer.
Ask the doctor: Is swimming in cold water okay for my heart?
I love to swim in the ocean for 20 or 30 minutes. The water is cold (55° F) but I don't mind. I'm almost 80. I had my mitral valve repaired five years ago, and my heart rate is sometimes irregular. Are my cold-water swims okay for my heart?
Ask the doctor: Should I be taking a statin?
I had a heart attack three years ago at age 78. My doctor started me on lisinopril, carvedilol, and aspirin. My total cholesterol is 190, and my LDL is 128. Should I be taking a statin?
Ask the doctor: What should I do about high triglycerides?
On my last blood test, my triglycerides were 280. Should I be worried about that? My doctor wants me to start taking something called Lopid. Is there another solution?
COURAGE not followed by action
The results of the COURAGE trial were expected to change the attitude of doctors regarding angioplasty procedures, but it seems that this shift has not happened.
What to do when blood pressure resists control
Resistant hypertension can be brought down to a safer level, but it requires extra effort and careful attention.
Peripheral artery disease often goes untreated
This condition often goes untreated until it is too late, and research suggests that millions of people with peripheral artery disease are not taking the appropriate medications to control it.
Abundance of fructose not good for the liver, heart
A high intake of fructose, in foods like soda, pastries, and breakfast cereals, can lead to a buildup of fat in the liver, as well as an increase in bad cholesterol, blood pressure, and other factors that are bad for the heart.
Heart Beat: "Polypill" test raises questions
Researchers are still exploring the concept of a pill that combines aspirin, a statin, and two or more blood pressure medications.
Heart Beat: Caution advised on Chantix use
The FDA is warning people with heart disease that using Chantix to try to quit smoking increases their risk for cardiovascular problems.
Heart Beat: Another day in the sun for olive oil?
Another study promotes the heart-healthy virtues of olive oil, but it's also important to be mindful of the broader impact of diet on cardiovascular health.
Heart Beat: Failing hearts linked to broken bones
Researchers believe there may be a connection between heart failure and an increased incidence of broken bones.
Heart Beat: Pause in CPR before shock reduces survival
When pausing CPR before administering a shock from a defibrillator, the shortest possible pause will help increase the cardiac arrest victim's chances of survival.
Follow-up: Sodium/potassium ratio important for health
Most people now consume more sodium than potassium, but it should be the other way around. The ratio is important to heart health.
Ask the doctor: Do I need to take warfarin for occasional lone atrial fibrillation?
I'm 64 and have had lone atrial fibrillation for about a decade. My doctor wants me to take a blood thinner, but I'd rather not do this. Should I follow her recommendation? Also, is it possible that endurance-type exercise led to my atrial fibrillation?
Ask the doctor: How do I check my heart rate?
My doctor told me to check my heart rate when I feel certain symptoms, but I don't know how to do it. Can you explain?
Blood vessel disease linked to dementia
Blood vessel problems can have a significant effect on the health of the brain, including contributing to the development of dementia.
Angioplasty via wrist artery safe, effective
A trial found that using the wrist as a point of entry for angioplasty procedures is as safe and as effective as using the femoral artery.
The smartphone will see you now
Smartphone apps assist in keeping track of personal health data, provide general information, offer emergency assistance, and more.
More to the story than alcohol = heart protection
Alcohol's benefits to cardiovascular health are well known, but even moderate consumption comes with an increased risk for a stroke.
Heart Beat: Nature trumps nurture for heart disease
According to a Swedish study, the influence of genes on the development of heart disease is stronger than environment.
Heart Beat: Water exercise safe for troubled hearts
Water exercise is beneficial and safe for those with heart disease.
Heart Beat: Repeat "zaps" often needed to stop atrial fibrillation
People with atrial fibrillation who undergo an ablation procedure may need two or more of the procedures to ease the arrhythmia.
Ask the doctor: Can exercise damage my pacemaker's wires?
I had a pacemaker implanted a few months ago. I am planning to join a gym, but I am afraid of damaging the wires with some of the presses and pull-down movements I would have to do to work out. Are there any exercises or movements I should avoid?
Ask the doctor: Compression stockings for a long-distance flight?
My 61-year-old mother plans to take a long plane trip. Her legs usually become swollen when she flies a long distance. Should she wear elastic stockings or take any other precautions so she doesn't develop a blood clot in her legs?
Ask the doctor: What accounts for wide swings in blood pressure?
My blood pressure has wide swings each day. It can go as high as 210/110, then fall to 100/50, tiring me. My doctor says I'm just a "reactive person." My diet is excellent, and I try to keep active. Could my adrenal glands have anything to do with this?
Ask the doctor: What is a good plan for serious heart failure?
My 69-year-old husband has had cardiomyopathy and diabetes for years. Lately his ankles are always swollen. At his last doctor visit, his cardiologist said his heart has leaky valves and his ejection fraction is 10%. What would be the best plan for him?
The hidden burden of high blood pressure
In addition to necessary changes in diet, activity, and the need to take medication, hypertension also takes a toll on life expectancy.
Can a hospital stay make you anemic?
Receiving hospital treatment for a heart attack may lead to anemia, due to the amount of blood taken for testing.
Don't delay if heart failure symptoms worsen
Paying attention to changes in your body can help prevent a recurrence of heart failure.
Latest thinking on a "cardioprotective" diet
Structuring a diet around types of foods rather than specific nutrients to eat or avoid is an easier way to practice healthy eating.
Heart Beat: Low-fat diets place third of three in cholesterol-lowering power
Low-fat diets are not as effective at lowering cholesterol as Mediterranean and portfolio diets.
Heart Beat: No need to stop aspirin, Plavix before tooth removal
Thanks to a change in dental procedure, people who take aspirin and Plavix to prevent clotting do not have to stop taking the drugs before oral surgery.
Heart Beat: Two-drug combo a good start for high BP
People who take a combination of two blood-pressure medications are more likely to get their pressure under control than those who take just one medication.
Heart Beat: Heart attack treatment happening faster
Hospitals have shortened the interval from when a person having a heart attack arrives to when angioplasty begins.
Heart Beat: Cholesterol level in middle age predicts length and quality of life
A decades-long study found that people who had a lower cholesterol reading at midlife lived an average of five years longer than their high-cholesterol counterparts.
Heart Beat: The race to high blood pressure
African-Americans with prehypertension are more likely to progress to full-fledged high blood pressure, and to do so sooner, than whites.
Further information about a breast cancer drug that may weaken the left ventricle.
Ask the doctor: Can stopping aspirin cause heart problems?
I've read that if you take aspirin every day, stopping it temporarily increases your chance of having a heart attack more than if you had never taken aspirin. Is that true? If I need to stop taking aspirin for some reason, is there a safer way to do it?
Ask the doctor: Can medications make the heart stronger, like exercise does?
When a friend of mine had a stress test, his doctor gave him a medication to make his heart work harder, instead of having him run on a treadmill. Does that mean medications could replace exercise to strengthen the heart?
Angioplasty a day after a heart attack not worth it
People who wait more than 24 hours after a heart attack to get an angioplasty do not benefit from it.
Preventing pacemaker, ICD infections now a priority
An increase in the number of infections in people receiving implanted heart devices means caregivers need to make prevention of infection their priority.
Putting heart attack, stroke triggers in perspective
Certain activities and situations can trigger heart attacks in those at risk, but researchers are showing how these risks need to be placed in the proper context.
Beta blockers: Cardiac jacks of all trades
Beta blockers are useful in treating a variety of cardiovascular conditions including angina, heart failure, and hypertension.
Healthy Eating Plate dishes out sound diet advice
The Harvard School of Public Health and Harvard Health Publications have worked together to offer a more detailed alternative to the government's MyPlate dietary recommendations.
Heart Beat: Leg workouts improve exercise capacity in people with heart failure
A specifically tailored exercise program may help people with heart failure regain strength without overworking the heart.
Heart Beat: Just-in-case electrocardiograms not recommended
An expert advisory panel reiterated its belief that healthy people who have not been diagnosed with heart disease do not need to get an electrocardiogram test.
Heart Beat: Any exercise better than none to thwart peripheral artery disease
For people with peripheral artery disease, any sort of physical activity is better than not doing anything.
Ask the doctor: Should I get more potassium from a salt substitute?
You've emphasized that people generally eat too much sodium and not enough potassium. Could I solve both problems at once by replacing my regular table salt with a substitute containing potassium?
Ask the doctor: How low should my LDL go?
I come from a long line of family members with heart disease. Right now, my HDL is 62 mg/dL [milligrams per deciliter], and my LDL is 115 mg/dL. My doctor isn't worried about my LDL, but shouldn't I shoot for an LDL level under 100 mg/dL?
Small change adds up
Trying to make major lifestyle changes to improve health is difficult. These ten small changes are easier to implement and can help you take better care of your heart.
How good is your hospital?
Organizations that track and compile data on the quality of hospitals can help prospective patients make better decisions about their care.
Bringing hospital care home
A movement called Hospital at Home seeks to provide professional-caliber medical care at home to people who need care but do not need to be hospitalized.
Spotlight on cardiovascular drugs: Statins on the front line against heart disease
Statins are widely prescribed cardiovascular medications that lower LDL cholesterol and help fight inflammation. But they can cause side effects, so it is important to discuss their benefits and risks with a doctor.
How old are your arteries?
Two tests can be used to evaluate the health of a person's arteries, but there is also a free tool that estimates risk using answers to a few health questions.
On the horizon: Targeting nerves to heal the heart
The body's nerve system regulates heart rate and blood pressure, so researchers are looking at ways to use nerve stimulation to treat cardiovascular conditions.
On the horizon: DNA 'caps' offer target for heart drugs
Tiny parts of chromosomes called telomeres appear to have a relationship to the development of heart disease, opening a new avenue of research.
A study is planning to test the effectiveness of continuing to take post-stent medication past the recommended 12 months.
Ask the doctor: Is it safe to take ginkgo with warfarin?
I have been taking ginkgo pills for my memory for several years. I was just diagnosed with atrial fibrillation, and my doctor put me on Coumadin. Is it okay to keep taking ginkgo?
Ask the doctor: Is it okay to drink wine if you have a slow heart rate?
If you have a slow heart rate (bradycardia), is it safe to drink wine? If so, how much per day? Does alcohol affect the heart rate?
Do healthy people need an aspirin a day?
Healthy people who do not have existing cardiovascular disease are unlikely to benefit from a daily aspirin.
Slow adoption of helpful heart failure drug
Studies have shown that people with heart failure can benefit from the drug spironolactone, but concerns about possible side effects may have made some doctors reluctant to prescribe it.
Raynaud's: The big chill for fingers and toes
Raynaud's phenomenon is a sudden spasm of the blood vessels in the hands that blocks blood flow to the skin, causing pain.
Off-pump bypass surgery: Promise unfulfilled
Off-pump bypass surgery was touted as a better alternative to the traditional method, but findings show the two types yield similar results.
Heart Beat: Don't give frozen produce the cold shoulder
Eating frozen fruits and vegetables is a good way to boost the nutritional value of your diet when fresh local produce is not available.
Heart Beat: Controversial warning on Plavix and stomach-protecting medications
The FDA has warned doctors that certain stomach-protecting medications may interfere with the clot-blocking drug Plavix.
Heart Beat: A vanishing breed
Researchers claim that only 8% of Americans are healthy enough to remain free of cardiovascular disease without the assistance of a medication.
Ask the doctor: What is diastolic dysfunction?
My last echocardiogram showed mild diastolic dysfunction. What does that mean?
Ask the doctor: Should I have an angiogram to confirm a worrisome calcium score?
CT scans show my arteries are in the 89th percentile for calcium scores. My stress tests and echocardiograms are normal, so are my blood pressure and cholesterol, and I feel fine. Should I have an angiogram to confirm the calcification?
Ask the doctor: Can getting too excited while watching sports be harmful to my heart?
I like sports, and now that I am in my 60s and have had some trouble with my heart, I mainly enjoy them on television. My family sees how excited I sometimes get watching a game and they worry that it is bad for my heart. Can you tell them to relax?
Ask the doctor: Can a blocked artery cause jaw pain?
Lately when I climb the stairs or get really stressed, my jaw starts hurting. Is that just an oddity or something I should worry about?
HDL: The good, but complex, cholesterol
HDL cholesterol can be boosted by taking niacin or a fibrate, but there are possible side effects to these medications. Lifestyle changes like exercising, losing weight, and paying attention to diet should help boost HDL.
Bringing clarity to CRP testing
The hsCRP test measures the blood level of C-reactive protein, an indicator of inflammation. Whether or not the test is worthwhile depends on a person's level of cardiovascular risk and whether there is a family history of heart disease.
Protecting the heart from cancer therapy
Treatment for cancer may have unwanted effects on the heart. Depending on the type of cancer and the type of treatment, these can include irregular heart rhythm, inflammation, atherosclerosis, or an increased risk of blood clots.
Heart Beat: New prescription for some leftover drugs
The Food and Drug Administration offers guidelines on how to properly dispose of leftover medications.
Heart Beat: Cut salt for resistant hypertension
People with hypertension who are unsuccessful at controlling it with medications may benefit from a low-salt diet.
Heart Beat: It's never too late to quit smoking
Quitting smoking, even after a heart attack, will likely increase a person's longevity, and even cutting back on cigarettes is beneficial.
Heart Beat: No sailing away from heart disease
Cardiologists offer advice to people with cardiovascular conditions who are traveling on cruise ships.
Heart Beat: Lack of sex affects the heart
A lack of interest in sexual activity may be connected to cardiovascular issues.
Brief reports on a link between heart transplants and higher risk of skin cancer, the possibility that drinking coffee or tea may slightly lower the risk of diabetes, and atherosclerosis in mummies.
Ask the doctor: Is no-flush niacin as effective as other kinds of niacin?
I tried taking niacin to increase my HDL but didn't like the flushing it caused. A friend told me about no-flush niacin, which works like a charm. Why not tell your readers about it?
Ask the doctor: Does joint replacement surgery cause heart rhythm problems?
Six months after having my knee replaced, I developed an arrhythmia. I know of this happening to others, including a friend who developed an arrhythmia after having his hip replaced. Does joint replacement surgery often cause heart rhythm problems?
The American Heart Association is promoting a series of healthy lifestyle habits known as "The Simple 7" in an effort to improve the health of Americans and reduce deaths from cardiovascular disease.
Exercise stress test
The exercise stress test is used to identify potential problems with heart rate, rhythm, or blood pressure. It is used when there is cause to suspect a person has heart disease, but it does not make sense for healthy people to have the test.
A personal approach to heart failure
A self-care plan can help keep people with heart failure healthy and active. This advice will also be helpful to the people caring for those with heart failure.
Taming a killer
Heart attacks are much less deadly than they used to be, primarily due to advances in knowledge and understanding of the underlying cause of heart attacks, and to the prevalence of specialized coronary care units.
Heart Beat: Dual protection
The steps that should be taken to prevent dementia are also likely to help protect the heart and the rest of the body.
The VITAL study hopes to determine whether taking vitamin D and omega-3 fats have an effect on rates of cardiovascular disease, cancer, and other illnesses, and whether high dosages of these supplements are safe.
Brief reports on the potential risks of a certain diet drug, cutting salt intake, the effect of bronchitis and emphysema on the heart, and fish oil and longevity.
Ask the doctor: Can I take PreserVision for my eyes even though I take warfarin?
I recently began treatment for macular degeneration in one eye. My retinologist said that PreserVision might protect the other eye. But she cautioned that it contains vitamin E, which could cause a bleeding problem with Coumadin. What would you suggest?
Ask the doctor: Can allergies cause high blood pressure?
I have allergies. Could they be the reason I have high blood pressure?
Ask the doctor: Is it okay to travel to a high altitude with high blood pressure?
Some friends invited me to accompany them to Rocky Mountain National Park. I would love to go, but I have high blood pressure and worry that high altitudes are dangerous for people with this condition. Is that the case?
Chest pain: A heart attack or something else?
Chest pain is an indicator of a possible heart attack, but it may also be a symptom of another condition or problem. The type and location of the pain can help doctors determine what is causing it.
Banishing secondhand smoke
Secondhand smoke is a serious public health problem, and is almost as harmful for nonsmokers as smoking is for smokers.
A no-surgery fix for atrial fibrillation?
Catheter ablation has emerged as a potential treatment for atrial fibrillation, but about half of those who have the procedure need a follow-up, it is not known if the treatment is permanent, and there can be serious side effects.
Heart Beat: Blood clot prevention lacking in hospitals
The lack of mobility that often accompanies a hospital stay can cause a blood clot to form in a vein. Blood-thinning medication can prevent clots from forming.
Heart Beat: Walnuts and arteries
People who ate walnuts daily as an addition to their regular diets had more flexible arteries at the end of the trial period.
Heart Beat: Women and heart disease
Statistics from an American Heart Association survey reveal what women do and do not know about heart disease.
Heart Beat: Diabetes drug interferes with vitamin B12
About one third of those who take the diabetes drug metformin develop a vitamin B12 deficiency.
Heart Beat: Motorized scooters
In a study, people who used a motorized scooter to enhance their mobility experienced an increase in their levels of blood sugar.
Brief reports on a connection between shingles and stroke, the heart-protective properties of oats, and a warning about combining two HIV drugs in people with a heart rhythm problem.
The FDA has approved a heart replacement valve that is implanted via a catheter. Men with heart disease who receive androgen-deprivation therapy for prostate cancer should have their heart health monitored carefully.
Ask the doctor: What can I do to stop smoking if the "standard" treatments don't work for me?
I recently had stents placed in two coronary arteries. The doctors, of course, told me to quit smoking. I have tried to quit but just can't. Hearing over and over that I need to quit leaves me feeling depressed. Is there news that might give me some hope?
Persistence pays off in cardiac rehabilitation
The key to a successful cardiac rehabilitation program is sticking with it. Those who complete a program have increased longevity and less chance of having a heart attack or stroke.
Better ways to get your produce
For better, fresher produce this summer, consider buying your produce from a local farmers' market or planting a garden and growing your own.
Coronary artery vasospasm
Vasospasm is a sudden narrowing of an artery, caused by a chemical imbalance, that can feel like a heart attack. It can disrupt the heart's rhythm or trigger a heart attack in a person with clogged arteries or a weak heart.
Clearing clogged arteries in the neck
A blockage in one of the carotid arteries can be cleared either by endarterectomy or carotid angioplasty. The latter is less invasive, but some research is showing that this method may have a higher risk of complications.
Heart Beat: Your choice for dieting
Researchers comparing diets found that the type of diet a person follows (low-fat, low-carb, etc.) is not so important, as long as it provides the necessary nutrition and matches a person's metabolism.
Heart Beat: Going steady
Wide-ranging daily blood pressure readings could be an indicator of increased risk of a heart attack or stroke.
Heart Beat: Some leniency on heart rate control in atrial fibrillation
Controlling heart rate is one strategy for managing atrial fibrillation. Keeping the heart rate below a more lenient number of beats per minute may be just as effective as aiming for a lower number.
Heart Beat: Get help with a huge medical bill
People who believe they have been overcharged for medical care or services can enlist a company to examine their bills.
Ask the doctor: Do I really need carotid artery surgery?
I am 86 years old and have high blood pressure and diabetes. My doctor ordered tests to check my carotid arteries. They showed that one was nearly 70% blocked. My doctor said I had to have surgery right away or I would have a stroke. Is she right?
Ask the doctor: Can I fly again after having a DVT?
Last year I had a deep-vein thrombosis with a small pulmonary embolism, apparently precipitated by flying across the country without getting up and walking around. Is it safe for me to fly again? If so, what precautions would you recommend?
Ask the doctor: Is earwax connected to heart disease?
I heard somewhere that the type of earwax you have is linked to your risk of heart disease. Can that be true?
Ask the doctor: Is CholestOff safe to take for someone who has had breast cancer?
I have been taking CholestOff for a few years to lower my cholesterol. Does CholestOff have any long-term side effects that might be a problem for breast cancer survivors like me?
Ask the doctor: Is my LDL too low?
I am a 59-year-old man. The results of my latest blood test showed that my LDL cholesterol was 67, which was flagged as low. (I do not take any cholesterol-lowering drugs.) Should I be worried, or do anything to raise my LDL?
Heat can beat the heart
Hot, humid weather can overwork the heart, which can pose risks for people with certain conditions, or those who take beta blockers or diuretics.
Eating can cause low blood pressure
Postprandial hypotension, low blood pressure that occurs after eating, can cause dizziness, chest pain, nausea, or other issues, particularly in the elderly.
Potential salt assault
The average person consumes more salt each day than the body requires, most of it from "hidden" salt in prepared and packaged foods. The FDA may ask food companies to voluntarily reduce the salt content of their products over the coming decade.
When and how to treat a leaky mitral valve
If the mitral valve in the heart becomes damaged it can leak, causing blood to flow backward and overwork the heart. A leaky valve can be surgically replaced, but in some situations repairing the valve is more effective than surgery.
Heart Beat: Tape of meeting eases jitters before bypass
Researchers found that when people having conversations with their doctors about impending bypass surgery were given a recording of the consultation, they had a better understanding of the procedure.
Heart Beat: Generic ARBs are coming
The FDA has approved the sale of a generic version of the angiotensin-receptor blocker medication losartan, and generic versions of two other ARBs may soon follow.
Brief reports on heart failure and avoiding rehospitalization, the dangerous combination of prehypertension and prediabetes, and a warning about eating Dead Sea salt.
Ask the doctor: Are there noninvasive alternatives to a nuclear stress test?
My doctor wants me to have a nuclear stress test to check my arteries for any blockages. What noninvasive test would give as much information as a nuclear stress test? I have had many scans, so I would like to limit my exposure to radiation if possible.
Ask the doctor: Does prednisone increase blood pressure?
I have rheumatoid arthritis, and my doctor wants me to take prednisone for it. Will this drug be bad for my blood pressure, which is already high?
Ask the doctor: What can I do to protect my heart if my body no longer makes testosterone?
I had an orchiectomy for prostate cancer. Not long afterward, I had two cardiac stents implanted. I still have some angina and shortness of breath. I started Ranexa, which helps my angina. Do you have any suggestions for protecting my heart?
Shining a light on thoracic aortic disease
A thoracic aortic aneurysm can be small and stable, or it can tear or rupture. People with certain genetic conditions, and those who have a relative who has had this condition, are at higher risk and should be tested.
Red meat: Avoid the processed stuff
Eating red meat regularly may not be as bad for us as was once believed, but frequent consumption of processed meats like hot dogs, cold cuts, and bacon is still unhealthy.
Diastolic heart failure
In diastolic heart failure, the left ventricle becomes thick and stiff. The symptoms are the same as those for systolic heart failure, but researchers are still searching for the best treatment strategies.
Heart Beat: Stents make later surgery riskier than usual
Getting a stent implanted within six weeks before having another, noncardiac surgery carries a much higher risk of having a heart attack or dying.
Heart Beat: Converting blood sugar to HbA1c
People with diabetes who take blood sugar readings at home now have a way to convert that information into a hemoglobin A1c value, which indicates a person's average daily blood sugar.
Heart Beat: Steroids and the heart
Among the side effects of steroid use, one serious consequence is a weakening of the heart's left ventricle.
Heart Beat: A sweet, nutty plan for better cholesterol, blood pressure
Eating moderate amounts of nuts and chocolate may bring heart-protective benefits in the form of lower LDL cholesterol and lower blood pressure, respectively.
Heart Beat: Exercise no trigger for defibrillator shocks
Having a defibrillator implanted does not preclude exercising.
Brief reports on an interaction between warfarin and a particular antibiotic prescribed for urinary tract infections, and outdoor exercise as a mood booster.
Ask the doctor: How could I have a heart attack after a normal exercise test?
I had a nuclear exercise test last fall, and it was perfectly normal. Imagine my surprise this spring when I developed burning chest pain that turned out to be a heart attack on the bottom part of my heart. Did the doctors mess up the reading of my test?
Ask the doctor: What are the alternatives to a statin for lowering cholesterol?
I have tried all of the statin drugs to lower my cholesterol, but each one has caused severe muscle pain. Are there any non-statin medications I could try using to lower my cholesterol?
Heart attacks come in all kinds, sizes
The term "heart attack" encompasses a number of conditions that vary in severity and treatment approach.
Diagnosing sleep apnea at home
Diagnosing sleep apnea typically requires an overnight stay in a hospital or sleep lab, but portable monitoring equipment may make diagnosis easier for some people.
Stand up for your heart
Research examining the dangers of inactivity suggests that those who are not currently physically active are likely to benefit from even a small amount of activity or exercise.
New thinking on saturated fat
The evolving understanding of the different types of fats in foods has changed the perception of saturated fat. Eaten in moderation, it is a useful part of the diet and is unlikely to affect cardiovascular health.
Heart Beat: Aspirin and diabetes
Guidelines for whether or not people with diabetes should take a daily aspirin to prevent heart attacks have been revised based on risk.
Heart Beat: Faith in medications fades
The challenges of maintaining a medication regimen for a long period of time are compounded by diminishing belief in the drugs' effectiveness.
Brief reports on migraines and stroke risk, HDL's potential role in lowering cancer risk, unhealthy food ads on TV, and the benefits of defibrillators in public places.
Reader to Reader
Readers offer suggestions and strategies for quitting smoking.
Ask the doctor: How often does a leaky mitral valve need to be checked?
Your article on mitral valve surgery didn't mention how often someone like me - with mild regurgitation from a leaky mitral valve but no symptoms - should have his or her valve checked. Are there any standards for this?
Ask the doctor: My heart is better - should I stop taking amiodarone?
After a heart attack my doctor put me on amiodarone. Three years ago, I started cutting back on it because of side effects. My latest electrocardiogram showed no signs of tachycardia, and my doctor wants me to stop taking amiodarone. What should I do?
Beating high blood pressure with food
A healthy diet that includes poultry, fish, whole grains, vegetables and fruits, nuts, legumes, low-fat dairy products, and unsaturated fats can help control high blood pressure.
Standing guard over blood vessel health
The layer of endothelial cells that lines blood vessels helps protect them and keep them functioning properly, but smoking, poor diet, and other risk factors can damage the endothelium, opening the door to heart disease.
Choosing the right replacement heart valve
If replacing a heart valve becomes necessary, the decision is mainly a choice between a mechanical valve, which requires the recipient to take warfarin to prevent clotting, or a tissue valve, which will not last as long as a mechanical one.
New heart rate estimate for women
A revised formula for calculating peak heart rate in women can help those who may want to determine a target heart rate as a guideline for exercise.
Heart Beat: Geography influences treatment of clogged carotid arteries
Researchers found that the rates of artery-clearing procedures varied significantly among different regions in the United States.
Heart Beat: Treat yourself to better blood pressure
Daily self-monitoring of blood pressure readings can help keep pressure from drifting upward.
Brief reports on anxiety disorders and increased risk of heart disease, the decline in trans fat use in fast food, and the health benefits of bicycling.
Ask the doctor: Am I exercising too much?
I am 80 years old. Forty years ago I had a heart attack. I stopped smoking but remained very active. My blood pressure, with the help of medications, is around 125/70. My physician thinks I am pushing too hard and has urged me to take it easier. Is he right?
Ask the doctor: Are raw oats better than cooked oats?
My family has squabbled about oats for some time. Some members say that to get the biggest health benefit from oats you need to eat them raw, moistened with water. Others say they should be cooked. Does cooking take something beneficial out of oats?
Calcium supplements and heart attack
As we age, bones lose calcium and arteries accumulate calcium, which causes them to stiffen. But it's still important to get enough calcium, which works with vitamin D in the body to keep bones strong.
Light and social smoking carry cardiovascular risks
Almost 25% of smokers smoke only a few cigarettes per day, or smoke only once in a while, but they are still exposing themselves to the same health risks as heavier smokers.
Resveratrol for a longer life - if you're a yeast
Many claims have been made about the ability of resveratrol to prevent heart disease and other illnesses, but the little research in humans has not tested for long-term health, and there are many unanswered questions about side effects.
Yoga could be good for heart disease
Yoga's combination of gentle exercise, stretching, focus on deep breathing, and the resulting greater mindfulness may be of particular benefit to people living with cardiovascular disease.
Heart Beat: Bad reaction to a medication? Let your voice be heard
The Food and Drug Administration has established a toll-free number that consumers can use to report adverse side effects from medications (both prescription and over-the-counter) and medical devices.
Heart Beat: Antidepressant little help in heart failure
A trial of the antidepressant sertraline in people living with heart failure did not ease their depression.
Heart Beat: Atrial fibrillation? Don't blame caffeine
Researchers have concluded that caffeine does not affect the development of atrial fibrillation.
Ask the doctor: Does pomelo juice affect drugs the same way grapefruit juice does?
I avoid grapefruit juice because my doctor says it affects how my body handles the Lipitor I take for my cholesterol. Should I also stay away from pomelo?
Ask the doctor: Do I need an MRI scan of my heart?
I am an 84-year-old man with atrial fibrillation, mild heart failure, and high blood pressure. My doctor had me wear a Holter monitor and get a SPECT scan. Now he wants me to have a cardiac MRI. What info would this test give that he doesn't already have?
Ask the doctor: Is there a safe way to stop taking warfarin before surgery?
I'm a 79-year-old man with atrial fibrillation on Pacerone. I also take warfarin and aspirin. I plan to have a tooth pulled next month and wonder if it is safe to go off the blood thinners. How are these medications handled when serious surgery is needed?
Ask the doctor: Is wood smoke a problem for my heart?
Many people in my neighborhood heat their homes with wood stoves. The smoke really bothers me. Does what's coming out of their chimneys affect my heart?
What can angioplasty do for you?
If you are having a heart attack, angioplasty will open a blocked artery and hopefully limit muscle damage, but the procedure does nothing to stop the spread of atherosclerosis or reduce the risk of a future heart attack.
Protein "package" matters in a low-carb diet
Evidence from ongoing health studies suggests that the source of protein in a low-carb diet influences the risk of heart disease, and that getting more protein from plant sources is better.
Refining the rules for abdominal aneurysm testing
A scoring system may help identify people who should be screened for an abdominal aortic aneurysm.
Coping with what you can't change
A number of risk factors for heart disease cannot be changed or controlled. Awareness of these factors can act as encouragement to pay attention to what can be controlled, such as diet and exercise.
Heart Beat: Alcohol: Moderation matters, especially with high blood pressure
Binge drinkers are more likely to have heart ailments, which may be of particular concern for those who also have high blood pressure.
Heart Beat: Snow and stents a chilly mix
Snow shoveling is a known trigger for heart attacks, and people with stents are at additional risk.
Heart Beat: Ornish, Pritikin get Medicare okay for cardiac rehab
The Pritikin and Ornish diets are now included in Medicare coverage for intensive cardiac rehabilitation, though only in certain locations.
Heart Beat: Heart, arteries thrive with more potassium
Boosting daily potassium intake by eating more fruits, vegetables, and low-fat dairy foods is likely to reduce the risk of a heart attack or stroke.
Further information from studies on an aortic valve repair procedure, the benefits of eating whole-grain foods, the heart risks of testosterone therapy, and underactive thyroid.
Ask the doctor: Do I need to get a flu vaccination this year?
Now that the fuss over H1N1 swine flu has died down, do I need to get vaccinated this year?
Ask the doctor: Do angiotensin-receptor blockers cause cancer?
I read that angiotensin-receptor blockers cause cancer. I take one (Diovan) for my blood pressure. Should I stop?
The editors of the Harvard Heart Letter introduce an issue focused on acquiring new knowledge in order to improve your health.
Nine tips for a healthier 2009
Start the year with these tips for heart care and healthier living. Suggestions include learning CPR, reducing stress, establishing an advance care directive and choosing a health care proxy.
The results of a large trial suggest that people with LDL cholesterol in the normal range but with a high C-reactive protein level may benefit from taking a statin. This may lead to increased use of the CRP to test for heart disease.
Make your health information personal
Gathering all your health records and vital information in one place can streamline your care and help doctors in the event of an emergency. Several web sites now offer ways to simplify the online storage of health information.
Changing picture of atherosclerosis
The medical view of atherosclerosis is changing from the traditional one of arteries blocked by plaque to a more encompassing one, with inflammation as the main cause and an emphasis on stopping it before it even starts.
Navigating the ocean of health information
There is plenty of information available online to help you learn about cardiovascular health, but not all of it is unbiased or accurate.
Ask the doctor: Is it possible to reverse coronary artery disease?
I have coronary artery disease. Is this something I can have cured or get rid of, or is keeping it from getting worse the best I can do?
Ask the doctor: Are big surges in blood pressure dangerous?
When I am under great stress, my blood pressure sometimes shoots up to 200/120 but then quickly goes down to 120/80 or lower and stays there. One doctor told me that spikes like these are normal. Another told me this isn't healthy. Who is right?
Ask the doctor: Is bundle branch block serious?
I had an electrocardiogram in preparation for minor surgery. My doctor told me it showed that I have right bundle branch block. Neither he nor my cardiologist are worried about it, but I am. Is this serious?
Bypass results vary by hospital
Researchers examining deaths during or soon after bypass surgery found that the surgeons and hospitals that did the most surgeries had the lowest death rates.
Spotlight on heart tests: C-reactive protein testing comes of age
A high-sensitivity version of the C-reactive protein test can help detect inflammation caused by atherosclerosis in people at moderate risk of heart disease.
Generic heart drugs as good as brand names
Analysis of clinical trials showed that generic versions of cardiovascular medications are as effective as their name-brand counterparts.
Two-way street between depression and heart disease
Heart disease and depression are often closely linked. Depressed people are more likely to develop heart disease, and those living with heart disease are more likely to become depressed. The main avenues of treatment are medication, therapy, and exercise.
Heart beat: Preeclampsia poses later heart risk
Women who experience preeclampsia during a pregnancy may be at higher risk of a heart attack, stroke, or other heart disease later in life.
Heart beat: Statins, aspirin affect prostate cancer test
Two studies found that use of a statin or daily low-dose aspirin may artificially lower the reading of a prostate-specific antigen (PSA) test.
Heart beat: Trial gives nod to home warfarin monitoring
Home monitoring devices for use by people taking warfarin compared favorably to regular blood tests done at a medical facility.
Heart beat: When success leads to failure
More people are surviving heart attacks and receiving better care afterward, which has led to an increase in the number of people living with heart failure.
Heart beat: C+E get an F for heart protection
Another large study adds to the evidence that taking vitamin C and vitamin E to protect against heart disease is not effective.
Heart beat: Beware cardiac arrest after heart attack
In the first month after surviving a heart attack, people are four times more likely to have a cardiac arrest than in the following months.
Using a special garment to squeeze the legs in time with the heart can ease chest pain. The inflammation that causes rheumatoid arthritis affects the heart as well as the joints.
Ask the doctor: What does an enlarged heart signify?
My doctor told me I have an enlarged heart. What is this? What causes it and what does it mean for my health?
Ask the doctor: Could my statin or exercise be affecting my kidneys?
Can muscle damage from a statin, or from strenuous exercise, elevate creatinine even after I have stopped taking the statin and exercising, while I continue to take Zetia and Diovan HCT?
The flap over mitral valve prolapse
Mitral valve prolapse is a bulging of the valve between the left atrium and left ventricle. Most people with the condition need no treatment and can expect to have a normal life span, though in certain cases the valve can start to leak.
Snapshot of the American diet: Foods out of balance
Americans eat too much fat and refined sugar, and not enough vegetables, fruits, whole grains, and fish. Making some conscious substitutions and food choices can promote heart health.
Creating order from chaos: Taming atrial fibrillation
Atrial fibrillation occurs when the heart receives an overload of signals telling it to beat, causing an irregular rhythm. It can be caused by a number of conditions including high blood pressure, heart failure, a viral infection, or stress.
Heart Beat: Exercise benefits clogged leg arteries
People with peripheral artery disease will most likely benefit from an exercise regimen, regardless of whether or not they are experiencing the leg pain that frequently accompanies the condition.
Heart Beat: Gasping shouldn't delay CPR
If a person who is having a heart attack is not breathing but occasionally gasps for air, CPR should still be administered. In the first few minutes after an attack, it is more important to focus on chest compressions.
Brief updates on a possible link between too little sleep and heart disease, higher blood pressure in winter, and the danger of fat around the heart.
Ask the doctor: Can I have a catheter procedure to stop atrial fibrillation?
My doctor told me I should think about having a procedure something like angioplasty to stop my atrial fibrillation. Can you tell me more?
Ask the doctor: Can I exercise even though my valves are leaking a little bit?
I am 78 years old. An echocardiogram showed a leak in my mitral valve. A follow-up test showed some leakage in my tricuspid valve. I like to exercise, but don't want to make these problems worse. Is it okay for me to walk on a treadmill or lift weights?
Ask the doctor: Do statins affect blood pressure?
I have been arguing with a friend about whether the statin drugs lower blood pressure. Do they, or don't they?
Radiation in medicine: A double-edged sword
Tests such as CT scans have become crucial tools in the diagnosis and treatment of many diseases and conditions, but the radiation exposure from these tests may lead to an increased risk of developing cancer.
Women's hearts need extra attention
Heart disease, once thought to be a man's disease, is now understood to affect women and men equally, but there are still disparities in the diagnosis and treatment of heart disease in women.
Potassium and sodium out of balance
The body needs the combination of potassium and sodium to produce energy and regulate kidney function, but most people get far too much sodium and not enough potassium.
Heart Beat: Binge drinking and stroke
A study from Finland shows an association between binge drinking and an increased risk of having a stroke.
Heart Beat: Osteoporosis drugs not linked to atrial fibrillation
An FDA review of trials involving bisphosphonate drugs used to treat osteoporosis found no link between their use and any increased risk of atrial fibrillation.
Brief updates on the benefit of the Maze procedure, St. John's wort's interference with statins, the safety of angioplasty performed through the radial artery, and the cardiac risks of newer antipsychotic drugs.
Ask the doctor: Will taking arginine and citrulline protect my arteries?
My husband is taking arginine and citrulline supplements because he read that they will protect his heart and arteries. Should I try these supplements, too, or is this a waste of money?
Ask the doctor: Does the length of the ST segment on an electrocardiogram matter?
I have an electrocardiogram as part of my yearly checkup. After the last one, my doctor mentioned that my ST segment was longer this year than it was last year. He recommended that I have a stress test to check this out. I passed with flying colors. When I asked the cardiologist who did the stress test about the ST segment, he said the length isn't really important, that the height and shape are what matter. Can you explain?
Ask the doctor: Does narrowing of the aortic valve get better on its own?
Does mild aortic stenosis (causing a mild heart murmur) ever correct itself without medication or surgery?
Trial renews surgery vs. stent debate
People with artery disease may have a choice between bypass surgery and angioplasty, depending on the circumstances of one's condition and other factors such as whether or not a person can take the medication clopidogrel.
Take the plunge for your heart
Swimming is exercise that benefits the heart, lungs, and blood vessels, without the joint stress and potential pain caused by running.
On the alert for deep-vein blood clots
Deep-vein thrombosis is a clot that forms in a leg or arm vein. Sometimes a piece of the clot can break away and travel through the bloodstream. If the clot lodges in a lung, it can be fatal.
No need to avoid healthy omega-6 fats
Omega-6 fats were once criticized as unhealthy, but researchers for the American Heart Association have concluded that they are in fact beneficial to the heart.
Heart Beat: Mindfulness helps ease heart failure
Mindfulness is a form of meditation that encourages greater awareness of one's surroundings and experiences. Volunteers with heart failure who participated in a mindfulness study reported lower levels of anxiety and feelings of better overall health.
Heart Beat: Say "nuts" to chips
Nuts are considered a healthy food when eaten in moderate amounts, but chewing them thoroughly seems to release more of the nut's nutritional value.
Heart Beat: A weight loss "secret": Calories matter
A comparison of several different diet strategies found that the choice of diet is less important than cutting daily calorie intake and exercising enough to burn extra calories.
Ask the doctor: How is atrial flutter different from atrial fibrillation?
What are the differences between atrial flutter and atrial fibrillation?
Ask the doctor: Is donating blood good for the heart?
Are there any cardiovascular benefits to donating blood? Is it like getting an oil change for your car, with the donation getting rid of old blood cells and the body making new ones?
Defining a moderate-intensity workout
A researcher has determined that the recommended "moderate intensity" exercise level can be accomplished by walking at least 100 steps per minute. An inexpensive pedometer can help you determine your walking speed.
Exercise equals angioplasty for leg pain
Angioplasty can be used as a treatment for intermittent claudication caused by narrowed leg arteries. While the results are quicker, equivalent benefits can be achieved with an exercise program combined with medication, without the risk and recovery period of surgery.
Treat "mini-strokes" as an emergency, not a gentle warning
A transient ischemic attack is similar to a stroke. While it may be over quickly, it must be treated as a serious medical condition. Prompt attention and treatment may prevent the subsequent occurrence of a full-fledged stroke.
New guidelines refine aspirin prescription
Taking a daily aspirin can help prevent heart attacks in men and strokes in women, but not everyone who takes aspirin should do so, because aspirin may increase the risk of stomach bleeding.
Heart Beat: Atrial fibrillation and blood pressure
People with atrial fibrillation benefit from aggressive blood pressure control, resulting in fewer deaths from stroke and other cardiovascular causes.
Heart Beat: Billions for heart care
In 2006 Americans spent more than $190 billion on heart-related health care issues, nearly one-fifth of total health care costs that year.
Brief reports on giving proper attention to high triglycerides, undermining cardiovascular drug therapy with unhealthy lifestyle choices, and an apparent bonus from taking a statin: reduced risk of blood clots.
Ask the doctor: How did my blood pressure suddenly become normal?
I was taking diltiazem and Atacand, which gave me an average blood pressure of 110/65. Recently while in the hospital, my blood pressure got so low I was told to stop taking these medications. My blood pressure has remained at 105/65. How can this be?
Ask the doctor: Why aren't prevention efforts stopping an increase in heart disease?
Why is heart disease still on the rise despite the incredible increase in the number of people taking cholesterol-lowering drugs and the more than 30 years of "low-fat" propaganda?
Ask the doctor: Do I need to take precautions if I stop taking warfarin before a colonoscopy?
I am due to have a colonoscopy. My cardiologist told me that I will need to stop taking Coumadin, which I take for atrial fibrillation, a few days before the procedure and get some injections. Is that really necessary?
Ask the doctor: Will a memory-boosting supplement interfere with my heart medications?
Since having a heart attack, I have been taking lisinopril, Zocor, Plavix, aspirin, fish oil, calcium, and a number of other vitamins and supplements. I am thinking of taking a brain booster called Procera AVH. Will it interfere with my heart medications?
Regenerating the heart
Researchers from Sweden have demonstrated that the heart is capable of growing new muscle cells, though this process occurs very slowly.
Redefining myocardial infarction
The definition of a myocardial infarction has been revised to reflect the significance of a protein called troponin, which is released into the bloodstream when heart muscle is damaged.
Advanced pacemaker gets the heart in sync
One-third of people with heart failure have ventricles that beat out of sync. A biventricular pacemaker sends electrical signals to the ventricles to keep them working together, making everyday activities easier.
Heart infection can pose a medical mystery
Myocarditis is an inflammation of the middle layer of the heart. It may be caused by a virus, allergic reaction, or exposure to a toxin. Diagnosis is difficult because symptoms are not specific and may suggest other causes.
Heart Beat: A single pill for prevention?
A "polypill" containing multiple blood pressure medications, a statin, and aspirin may be a simple, workable approach to help prevent heart disease.
Heart Beat: Aspirin gets a backup against atrial fibrillation
Aspirin plus warfarin is an effective defense against stroke-causing blood clots, but many people cannot take warfarin. A study found that aspirin plus clopidogrel (Plavix) was also effective.
Heart Beat: Summer: A good season for cholesterol
Levels of LDL cholesterol drop a few points in summer, while HDL rises slightly.
Heart Beat: Black tea and blood pressure
Black tea may lower blood pressure slightly, but the effect is small.
Heart Beat: The biggest loser
Results of a trial showed that exercise and weight loss combined with the DASH diet for blood pressure control achieved a greater reduction in systolic blood pressure than the diet alone.
Heart Beat: Traffic, anger strain the heart
A study of German heart attack survivors found a slight correlation between being stuck in traffic and risk of a heart attack.
On the horizon
A brief summary of research with potential future applications: closing off the left atrial appendage to prevent clots, stimulating the brains of stroke victims with laser beams, and a new type of stent that dissolves over time.
Ask the doctor: Why is peanut butter "healthy" if it has saturated fat?
I keep reading that peanut butter is a healthy food. But it contains saturated fat and has more sodium than potassium. That doesn't sound healthy to me.
Ask the doctor: Is the term "coronary heart disease" redundant?
I always thought that coronary and heart meant pretty much the same thing. If that's so, isn't "coronary heart disease" redundant?
Stomach-protecting drug could block Plavix
Many people who take aspirin and clopidogrel (Plavix) to prevent blood clots also take a proton-pump inhibitor (PPI) to ease the gastrointestinal bleeding the other medications can cause. But a study found that PPIs can limit the effectiveness of Plavix.
Hole in the heart opens questions
Stroke victims are more likely to have a patent foramen ovale, a hole between the heart's left and right atria, but closing the hole may not prevent the occurrence of another stroke.
13 ways to add fruits and vegetables to your diet
Adding fruits and vegetables to your diet is a simple way to eat more healthfully. Here are some suggestions to make healthy eating more fun and interesting.
When the lights suddenly go out
Fainting occurs when blood flow to the brain is blocked or interrupted. An incident of fainting should be reported to a doctor, because if it was caused by a problem in the heart, it may lead to more serious problems.
Heart Beat: New name for TIA?
Readers suggest alternative terms for a transient ischemic attack.
Heart Beat: Preventable threats to survival
Everyone wants to live longer, and there are many preventable causes of death that can be avoided with proper health habits.
Heart Beat: Extending the time for stroke treatment
When someone has a stroke, immediate treatment is essential. The American Stroke Association says a clot-destroying drug called tPA may work for up to four and a half hours after the onset of a stroke, but should be given within an hour if possible.
Brief reports on CPR and an improved cardiac arrest survival rate, chewable aspirin as a rapid heart attack aid, and the effect of lack of sleep on blood pressure.
Ask the doctor: Are isometric exercises safe for the heart?
Long ago I was told that isometric exercises, like weight lifting, shouldn't be done by anyone with a heart condition. Is that still the prevailing wisdom?
Ask the doctor: What are silent heart attacks?
What are silent heart attacks? How are they different from regular ones? If they are silent, how does anyone know about them?
Ask the doctor: Why do I get chest pain when I don't warm up before exercising?
I work out regularly, but suffer from exercise-induced angina. If I start exercising without warming up, my chest starts to feel "tight" quickly. If I warm up properly, I can walk for several miles at a pretty fast pace without any pain. Can you explain?
Ask the doctor: How can you tell when a leaky mitral valve needs to be fixed?
I am an 82-year-old man with borderline leakage in my mitral valve. What symptoms or tests would help me and others recognize when it is time to consider having the valve fixed?
Pain relief balancing act
Frequent use of pain relievers can irritate the stomach, digestive tract, or possibly the heart and blood vessels. For people with heart disease, a study suggests that regular use of naproxen will not harm the heart.
Walk often, walk far
A cardiac rehabilitation program can help people with heart disease regain strength and stamina. An exercise program that emphasizes frequent walking over a more intense workout can result in greater loss of weight and body fat.
Heart Beat: Treating sleep apnea may pay off for the heart
Sleep apnea can damage the heart or cause rhythm problems. Treating the apnea may stop or reverse this damage.
Heart Beat: Double treatment for heart attack
People who receive a clot-destroying drug after a heart attack may also benefit from a subsequent angioplasty.
Heart Beat: Anxious about angina
Anxiety and depression greatly increase the probability of developing angina.
Brief reports on the use of compression stockings by stroke survivors, and the benefit of adding a second blood pressure medication.
Special section: Cardiovascular connections: Two-way street between heart and health
This special section highlights the relationships between the heart and many other parts of the body.
Special section: Cardiovascular connections: Body fat: The good, the bad, the...
Abdominal fat cells are responsible for many cardiovascular problems.
Special section: Cardiovascular connections: Two-way street between head, heart
Stress, anxiety, and negative emotions can bring on or worsen heart disease, and cardiovascular problems can contribute to dementia.
Special section: Cardiovascular connections: Psoriasis is more than skin deep
Psoriasis, which is not a skin disease but an immune disorder, may be linked to heart disease, possibly through inflammation.
Special section: Cardiovascular connections: Testosterone, sex, and the heart
Low levels of testosterone have been linked to many health problems in men.
Special section: Cardiovascular connections: The ovarian connection
Hormones produced by the ovaries are beneficial, but taking a hormone medication increases certain health risks.
Special section: Cardiovascular connections: Skeleton key
Taking measures to protect the heart, such as exercising and eating a healthy diet, can also help prevent osteoporosis.
Special section: Cardiovascular connections: Odd associations
A number of odd connections exist between body parts and the cardiovascular system.
Ask the doctor: Does exercise help damaged heart muscle?
After my heart attack, my doctor told me that damaged heart muscle cannot be replaced. If this is true, why am I walking on a treadmill five days a week? Is this helping repair the damage or strengthen what's left?
Ask the doctor: Are there radiation-free tests for checking my arteries?
Are there any noninvasive, radiation-free tests that can give the same information about possible blockages in my coronary arteries as a nuclear stress test? I've had so many CT scans for other conditions that I'd prefer to go non-nuclear for a while.
11 foods that lower cholesterol
Certain foods, such as beans, oats and whole grains, fatty fish, and fruits and vegetables that are high in fiber, can lower "bad" LDL cholesterol.
After a heart attack
Carefully following the discharge instructions after a heart attack, including participating in a cardiac rehabilitation program, provides a much better chance of a full recovery and preventing another attack.
Atrial fibrillation, angioplasty drugs approved
Two new medications have been approved by the Food and Drug Administration, one for people with atrial fibrillation and one that works to fight the formation of clots.
Cautious confirmation for easier aneurysm repair
An abdominal aortic aneurysm can be dangerous if it grows beyond a certain size. A newer, less invasive procedure can correct the problem with less risk than open surgery.
Heart Beat: Big chill for cardiac arrest
Rapid cooling of cardiac arrest victims increases their chances of eventual survival by reducing the extent of damage caused by lack of oxygen to the brain when the heart stops.
Heart Beat: Trial Watch
A new study is comparing methods of treating leg pain caused by peripheral artery disease.
Ask the doctor: Does heart rate affect blood pressure?
When doctors interpret a blood pressure reading, should they also consider the heart rate? My pressure is often higher when my heart rate is close to its usual resting rate and lower when my heart is beating faster than that.
Ask the doctor: Should I wait to have my aortic valve replaced?
I'm an 85-year-old man with aortic valve stenosis, coronary artery disease, and atrial fibrillation. My doctor said I should wait until I experience signs of heart failure before having my aortic valve replaced. Shouldn't I get it done sooner?
Ask the doctor: Are some blood vessels more prone to blockages than others?
Are the coronary arteries more prone to developing blockages than arteries elsewhere in the body? When arteries from other parts of the body are used in bypass surgery, does their tendency to become blocked change?
Ask the doctor: Is vinegar good for the arteries?
I've heard that apple cider vinegar can clean out the arteries. Is there any truth to that?
Sporadic high blood pressure deserves attention
Monitoring your blood pressure by taking daily readings at home over a period of time can provide a more accurate sense of your true pressure than a reading in the doctor's office, which may be artificially high or low.
Exercise prescription for diabetes
The American Heart Association recommends that people with type 2 diabetes should undertake an exercise program combining aerobic exercise with strength training. This strategy is best for protecting the heart and improving muscles' response to insulin.
6 steps to safer use of triple therapy
People taking the "triple therapy" combination of aspirin, Plavix, and warfarin are at increased risk of bleeding. A panel of experts has recommended a set of guidelines intended to make taking this drug combination safe and effective.
Using music to tune the heart
Researchers are exploring how the use of music therapy may aid in the treatment and recovery of cardiovascular patients.
Heart Beat: Setting standards for pacemaker and ICD lead extraction
The Heart Rhythm Society has published guidelines for the procedure to remove broken, damaged, or worn out pacemaker or ICD leads.
Heart Beat: Heart failure tough on B vitamins
People with heart failure are more likely to have a B vitamin deficiency, possibly due to decreased appetite, faster metabolism, and medications that may remove certain nutrients from the body.
Heart Beat: Statins before vascular surgery
A Dutch study that recommends starting to take a statin medication prior to having vascular surgery supports existing advice from the American Heart Association.
Heart Beat: Go Mediterranean for the brain and heart
A pair of studies adds to the evidence that a Mediterranean-style diet not only benefits the heart, but can also help counter age-related decline in brain function.
Heart Beat: Blood pressure reading affected by eating
Eating before having a blood pressure test can artificially lower the reading by a few points.
Brief reports on a potential alternative to warfarin, the added harm of cholesterol in fried foods, reducing stroke risk, and comparing higher doses of a statin with a combination drug.
Ask the doctor: Is it worrisome to hear a pulse in my ear?
One morning last week I woke up hearing my heartbeat in my left ear. I hear it most clearly when I am in bed or sitting quietly. My health is good, and I was told after a recent cardiac workup that my heart was "perfect." Should I be worried?
Another reason to get a flu shot: your heart
Any infection, including the flu, can stress the heart and lead to higher blood pressure, breathing problems, increased heart rate, or inflammation. Getting a flu shot can help protect the heart.
Vitamin D: a bright spot in nutrition research
Many older people do not get enough vitamin D, which may contribute to coronary artery disease and high blood pressure. Exposure to sunlight is not a reliable source of vitamin D during the winter months, so taking a supplement is recommended.
Blood pressure: How low should you go?
Keeping blood pressure low is important to overall health, but for people with coronary artery disease, lowering diastolic blood pressure too much could increase the risk of a heart attack.
Heart Beat: Peripheral artery disease and stroke
People who have had a transient ischemic attack or a stroke should consider getting an ankle-brachial index test to check for peripheral artery disease.
Heart Beat: Shellfish for the heart?
Shellfish may not offer the same protection against heart disease as finned fish, but it is still a healthier alternative to red meat.
Brief reports on the beneficial effect of weight loss on the heart, eating a Mediterranean-style diet to control blood sugar, and an attempt to compare angioplasty and exercise as treatments for angina.
Ask the doctor: Should I double up on aspirin if I think I am having a heart attack?
I've heard you should take an aspirin if you think you are having a heart attack. I already take aspirin (325 mg) every day. Should I still take an aspirin if I feel a heart attack coming on?
Ask the doctor: How can I keep my coronary arteries from going into painful spasms?
What can be done for endothelial dysfunction that causes coronary artery spasms and requires nitroglycerin at least four times a day?
Ask the doctor: Can I take red yeast rice instead of a statin to lower my cholesterol?
What is the story on using red yeast rice to lower cholesterol? You have warned readers against using it in the past, but I heard about a new study that shows it works. Are you ready to admit you are wrong on this one?
Gene tests for some, not all
Certain inherited genetic conditions increase the risk of cardiovascular disease, so having a genetic test may show whether a person is at risk for heart disease, especially if a family member has one of the conditions.
When an artery becomes narrowed by plaque, the body responds by growing and strengthening nearby blood vessels to move blood around the narrowing, possibly preventing heart disease. Vigorous exercise can stimulate this blood vessel growth.
Mechanical assist for heart failure
For people with severe heart failure, a pumping device called a left ventricular assist device may prolong life for those who are not healthy enough for transplant surgery, or who face a lengthy wait on the transplant list.
Repairing the heart one cell at a time
Researchers are hoping to one day be able to use stem cells to repair heart muscle damaged by a heart attack, but so far the tests have not succeeded.
Heart Beat: Look alive - it's Monday!
A public service campaign aims to encourage people to make healthier lifestyle choices by thinking about them each week on Monday.
Heart Beat: New vitamin helps lower cholesterol
A new multivitamin includes phytosterols, which help the body block the absorption of cholesterol. Phytosterols occur naturally in plants, but in small quantities, making it difficult to eat enough from foods to obtain their benefits.
Heart Beat: Chilling out
In an emergency cardiac arrest situation, rapid cooling of the body can improve a person's chance of survival and limit the possibility of brain damage.
Bystanders using defibrillators on cardiac arrest victims double their chances of survival. A new type of defibrillator provides audio guidance to help bystanders use the device properly.
Ask the doctor: Can I take a diuretic?
I had to take hydrochlorothiazide and Lasix together. After an electrolyte imbalance, my doctors told me never to take these medications again. I recently had my aortic valve replaced, and am retaining water. Are there any diuretics I can safely take?
Ask the doctor: Can a massage cause a stroke?
I have a deep muscle massage every month or so. After my sister had a stroke, I started worrying that my massages could loosen any plaque in my carotid arteries, which could make me have a stroke. Could this happen?
Mini strokes are a maxi problem
Transient ischemic attacks (TIAs), while seemingly insignificant, often lead to strokes within a short time span. If you experience a TIA or have symptoms that suggest one, take it seriously and seek treatment right away.
Angiotensin inhibitor or blocker?
For people who need medication to lower blood pressure, there are two types of drugs available. ACE inhibitors have been available longer than ARBs. They are comparably effective, though several ACE inhibitors are available in generic form.
Surgery or angioplasty for opening a clogged neck artery?
Those with a narrowed carotid artery have a choice between two procedures to clear the blockage: endarterectomy or angioplasty. Although it is less invasive, the risk of stroke is higher following angioplasty, making it the riskier choice for most people.
Triglycerides: A big fat problem
Triglycerides contribute to atherosclerosis, usually (but not always) in combination with other factors. Triglycerides can be lowered by making lifestyle and diet changes, though some people also need a medication.
Heart Beat: Sexy supplements can be bad for the heart
Several supplements marketed to men claiming to enhance sexual function were found to contain substances similar to erectile dysfunction drugs, which can be dangerous for men who take certain medications for heart disease.
Heart Beat: Pedometer-powered walking
A pedometer is an inexpensive tool that can help promote fitness by measuring a person's steps. Being aware of how much one is walking acts as a motivator to walk more.
Heart Beat: New blood sugar measure
The American Diabetes Association has set a standard for measuring blood sugar that expresses the information in two different forms, corresponding to the way the reading is given in medical tests and also in home testing.
Brief updates on tingling stents, Alzheimer's disease and blood pressure, exercise as medicine, and vascular disease in women.
Ask the doctor: Is this pain from my heart?
Every now and then I get a sudden, sharp pain on the left side of my chest, like a knife. I get a little dizzy, and then it disappears as quickly as it came on. Even though the test results were fine, it still scares me. Should I be concerned about this?
Ask the doctor: Could heart surgery have affected my lung?
Almost three years ago I had triple bypass surgery and mitral valve replacement. I did so having only one lung. It feels like the operation somehow harmed my lung, making it harder to breathe. Can heart surgery do this?
Ask the doctor: Is my blood pressure normal?
I'm a 75-year-old woman. The top number of my blood pressure is between 135 and 140, the bottom number around 75. My doctor says this is fine. Should I believe him?
Ask the doctor: Are all dark chocolates good for the heart?
Dark chocolate is supposed to be good for the heart. But how do I know which chocolate is "dark"? Some labels list percent dark chocolate, others percent cocoa solids. Can you help me pick the best one?
Several medications commonly prescribed to heart patients can cause adverse reactions, even when taken as directed. Therefore caution should be used and any unusual symptoms or side effects should be reported to a doctor right away.
Electrocardiogram: Visualizing the heart's electrical signature
An electrocardiogram test is an important tool in diagnosing heart disease. Doctors can analyze the information to evaluate a patient's condition and identify problems.
Focus on hormones: Estrogen therapy - benefits in the timing?
Estrogen is beneficial for controlling symptoms of menopause, but its benefits for younger women seem to become risks for older women, so it should not be taken to prevent heart disease.
Focus on hormones: Testosterone therapy's benefits, risks need crystallizing
Lower levels of testosterone may correlate with a higher risk of heart disease, but taking a testosterone supplement may also increase the risk of prostate cancer and reduce beneficial HDL cholesterol.
Heart Beat: It's never too late for healthy eating and exercise
Changing one's dietary and exercise habits is beneficial to overall health, regardless of age.
Heart Beat: Lopsided decline in heart disease deaths
Death rates from heart disease had been declining since the 1960s, but recently they have leveled off in men and increased very slightly in women, probably due to increases in obesity and diabetes.
Heart Beat: Hole in the heart
A patent foramen ovale is a small hole in the heart which usually closes soon after birth, but if it does not, it may be a cause of stroke later in life.
Heart Beat: It was only a matter of time: Plant sterols in chips
Plant sterols have been shown to reduce cholesterol, so a company is manufacturing tortilla chips with sterols added.
Heart Beat: Good news keeps brewing on coffee and heart disease
According to an Italian study, heart attack survivors can safely drink a cup or two of coffee a day without increased risk for additional heart disease.
Heart Beat: A Chia Pet for diabetes?
A seed related to those used in Chia Pets is a good source of fiber, protein, and antioxidants, but the same benefits can be obtained from eating whole-grain food products.
Heart Beat: Red light, green light on medications
Certain medications used to treat anemia may increase the risk of heart attack, while the FDA concluded that the heartburn drugs Prilosec and Nexium are not harmful to the heart.
Heart Beat: Checking blood pressure at home
People with high blood pressure who regularly monitor their blood pressure at home seem more likely to lower their pressure over time than those who only have it checked at a doctor's office.
Ask the doctor: Is it okay to take aspirin, Plavix, and warfarin?
I am 85. I had an angioplasty with a stent and I'm on aspirin and Plavix. Now I have atrial fibrillation, and my doctor wants me to take Coumadin. Is this dangerous? Should I stop taking aspirin and Plavix? Or could I just take them without the Coumadin?
Ask the doctor: Can a pacemaker cause dizzy spells?
My father recently had a pacemaker installed. Now, almost every time he stands up or gets out of bed, he feels dizzy. Is this a common side effect of getting a pacemaker, something he needs to get used to? Is there anything he can do about it?
Angioplasty or bypass surgery?
Blocked arteries can be resolved by either angioplasty or bypass surgery. Angioplasty is a much easier procedure, but frequently needs to be repeated later. For some, medication, exercise, and changes to diet are more effective than either procedure.
Trial fails to enhance cholesterol drug's reputation
Research found that the cholesterol drug Vytorin, which combines Zetia with the statin Zocor, is no more effective than a statin alone at preventing plaque from growing in arteries.
Big trouble from small arteries
Coronary microvascular disease, which affects the smallest arteries in the heart, is difficult to detect because of the small size of the vessels, but tests are improving, and awareness of the condition among doctors is growing.
State-of-the-heart therapy for prostate cancer
Aggressive prostate cancer can be treated by using hormone therapy to lower testosterone, but it can result in higher cholesterol, high blood pressure, stiffening arteries, and other heart-unhealthy conditions.
Heart Beat: Golden opportunity to fight heart disease
Research has found a potential connection between low levels of vitamin D and increased risk of heart attack, stroke, heart failure, and other cardiovascular conditions.
Heart Beat: Small price to pay for an extra 14 years
People who exercised, ate a diet rich in fruits and vegetables, drank alcohol in moderation, and did not smoke lived an average of 14 years longer than others who did not do any of these things.
Heart Beat: Dual duty for WelChol
The FDA has approved WelChol, a medication that helps to lower both LDL cholesterol and blood sugar, which may be beneficial to some diabetics, though they would still have to take insulin along with WelChol.
An excess of aldosterone, a hormone produced by the adrenal glands, can increase the risk of heart disease. The problem can be treated with medication, but sometimes it is necessary to remove the glands.
An article on very high triglycerides in the February 2008 issue neglected to mention that cutting back on processed carbohydrates and replacing them with whole, minimally processed foods can substantially lower triglycerides.
Ask the doctor: Do people on warfarin need to avoid vitamin supplements that contain vitamin K?
You mentioned Centrum Cardio, a new multivitamin supplement that supposedly lowers cholesterol. Taking two tablets a day would deliver 25 micrograms of vitamin K. Is it wise to recommend this product for someone taking warfarin or other anticoagulants?
Ask the doctor: Is a nuclear imaging stress test the same thing as an exercise stress test or exercise echocardiogram?
I had some chest pain on vacation. The doctor told me to have a nuclear imaging stress test when I got home. My physician sent me for a treadmill test. But the cardiologist had me do an exercise echocardiogram on a bicycle. Are these tests the same?
Ask the doctor: Are medications for ADHD safe for the heart?
I am a fairly healthy 52-year-old man. For many years I have felt like I have ADHD. A recent work-up confirmed my suspicion. My doctor suggested I take Ritalin. Is that okay for the heart?
Know the warning signs
Many people do not know or recognize the warning signs of a heart attack, stroke, or cardiac arrest. Because the brain may be deprived of oxygen during such an event, quick action could save someone's life.
ACCORD's discord on blood sugar control
A clinical trial measuring the effectiveness of maintaining tight control of blood sugar in diabetics was ended early due to a higher than expected number of deaths, but keeping blood sugar below a certain level is still important for those with diabetes.
It's time to accentuate the positive
A study found that people who maintained a positive approach to life in their thoughts and feelings, referred to as high emotional vitality, had a lower risk of heart disease.
Get a hearty start on the day
Eating a healthy breakfast has always been a smart nutritional choice. A breakfast containing whole grains and fruits may lead to reduced risk of heart attack, stroke, diabetes, or other cardiovascular ills.
A new crystal ball
Researchers with the Framingham Heart Study have developed a new tool to assess overall risk of cardiovascular disease by assigning points to various risk factors, then aligning total points with levels of risk.
Heart Beat: Heparin: a risky bridge over troubled waters?
People who take warfarin to prevent blood clots who need to undergo a surgical procedure generally need to stop taking the drug a few days beforehand, but for those who are at high risk of a clot, taking heparin instead for a few days may be necessary.
Heart Beat: Bypass surgery no barrier to sexual satisfaction
A survey of patients both before and several years after bypass surgery found that men were more satisfied with their sex lives than before surgery, but women were less satisfied than before.
Heart Beat: Dangers of skipping medications after a heart attack
After a heart attack, getting prescriptions filled and taking the medications consistently is crucial. Those who do not do so are at a much higher risk of dying within the year following an attack.
Heart Beat: Kudos on cholesterol?
The average American's cholesterol level has dropped for the first time, but this is due more to the prevalence of statin drugs than to improvement in people's dietary habits.
Heart Beat: New guidelines for bleeding disorder
The National Heart, Lung, and Blood Institute has released guidelines to assist physicians in identifying patients with von Willebrand disease, a blood disorder that prevents clotting.
Heart Beat: Panic attacks linked to heart disease
According to a study, women who suffered from panic attacks were more likely to have a heart attack or stroke than those who did not.
Brief updates on a possible connection between age-related macular degeneration and heart disease, the effectiveness of diuretics to treat metabolic syndrome, and an alternate way to test for claudication.
Ask the doctor: Are heart drugs causing my nighttime leg cramps?
Many nights I wake up once, twice, or several times with leg cramps. The only medications I take are a statin, niacin, Plavix, and baby aspirin. Are any of them causing these aggravating cramps?
Slow rehabilitation of drug-coated stents
Drug-coated stents were thought to cause a higher occurrence of thrombosis (clots) compared to bare-metal stents, but further research has shown the incidence to be about the same. Many people with a blocked artery could benefit from a drug-coated stent.
Don't delay when heart failure threatens
People living with heart failure need to pay attention to warning signs, such as shortness of breath, or swelling of the ankles or feet, that may indicate a worsening of their condition.
A second look at beta blockers and blood pressure
Beta blockers have helped millions of people lower their blood pressure, but for people with hypertension who do not have other cardiovascular issues or symptoms, a beta blocker might not be the most effective medication.
Seeing the heart with sound
An echocardiogram creates images of the heart using sound waves. It can reveal a great deal of information useful to doctors in treating heart patients.
Heart Beat: Warfarin home monitoring program expanded
People who take the medication warfarin need to test their blood regularly to monitor its clotting time. A Medicare program is providing the equipment and means to do this testing at home.
Heart Beat: Chrome dome doesn't mean sicker ticker
Researchers found no substantive links between baldness and the risk of a heart attack, or between baldness and the buildup of plaque in carotid arteries.
Having a home defibrillator was found to be no more useful at saving the life of someone in cardiac arrest than having family members trained in CPR.
Ask the doctor: Can I keep myself from fainting when I have blood drawn?
I want to donate blood, but I faint or come close to it nearly every time I have blood drawn at the doctor's office. Can I do anything to keep myself from fainting?
Ask the doctor: Is high blood pressure in the morning a problem?
My blood pressure is high when I first get up in the morning, but always drops back to normal by 9 a.m. and stays that way throughout the day. I take Avapro. My doctor says I shouldn't worry about the temporary high morning pressure. What do you think?
Age no barrier to blood pressure control
Doctors used to worry that the potential harm of blood pressure medication outweighed any benefit to elderly patients, but a study found that the medication did reduce the incidence of heart disease, stroke, and premature death in older patients.
Going after angiotensin
An angiotensin-receptor blocker controls blood pressure as well as an ACE inhibitor, but taken together they are more likely to cause unwanted side effects. ACE inhibitors are also available in generic form, which costs much less.
Fish and fish oil: Good for most folks, but not all
Fish oil contains beneficial omega-3 fats, but people with heart failure, angina, or an implanted cardioverter-defibrillator (ICD) should minimize their consumption of fish, and should not take fish oil capsules.
Does fitness offset fatness?
A person can be overweight and still be fit and healthy, but it is still better for the body to lose weight if possible, and even better to lose weight and get regular exercise.
Taking heart disease to new heights
Travel to high-altitude locations is risky for people with heart disease, but knowing the limitations of the condition and taking proper precautions can make the trip possible for some.
Heart beat: Air pollution fails the heart, vitamins may help
Air pollution is harmful to the heart, but a diet with adequate amounts of B vitamins and methionine, an amino acid, may counteract the health problems caused by pollution.
Heart beat: Heart-stopping thrills
Roller coasters and other high-velocity amusement park rides can cause spikes in heart rate and blood pressure that may be dangerous for riders with heart problems.
Heart beat: Hands-only CPR
The American Heart Association has revised its guidelines for administering CPR to a victim of cardiac arrest, and now recommends using only firm, quick chest compressions.
Heart beat: Sweeter note sounded for iPod users
An earlier advisory to keep an iPod away from the heart of a pacemaker wearer may not be necessary, but caution is still advised.
Heart beat: Calcium scan benefit still uncertain
A CT scan for calcium buildup in arteries near the heart can help predict the likelihood of an attack, but its cost outweighs its usefulness in people with low risk of heart disease.
Brief updates on why diabetics should limit their consumption of eggs, attempting to patch a hole in the hearts of some migraine sufferers, and a possible connection between clogged vein grafts and depression.
Ask the doctor: Should I worry about my low diastolic pressure?
I am a 70-year-old woman with type 2 diabetes. I take Glucophage, Glucotrol, and a statin. My diastolic pressure used to be between 70 and 80, but now it's down into the 50s and 40s. This seems low to me. Could this indicate a problem?
Ask the doctor: Does Tricor cause gallstones?
I started taking Tricor because I have low HDL and high triglycerides. Someone I know at work developed gallstones after being on Tricor for a while. Is this a common side effect? If so, is there another medication I can take?
Hypertension and diabetes - double trouble
People with hypertension should be tested for diabetes. Treating both conditions with lifestyle changes (exercise, weight loss, quitting smoking) can substantially reduce the risk of a heart attack or stroke.
Balancing hope and reality in heart failure
A hopeful outlook can help heart failure sufferers live with the condition, but hope must be tempered by reality in order to achieve a clear understanding of the limits of treatment.
Joint inflammation may point the finger at heart disease
Inflammation from rheumatoid arthritis, lupus, or gout has been linked to an increased risk of heart disease. Drugs are being tested, but the current advice is to exercise, eat a healthy diet, and control weight, blood pressure, and cholesterol.
When quitters are winners
Many people do not realize that smoking is as bad for cardiovascular health as it is for the lungs. Quitting has some almost-immediate benefits, and after 20 years quitters have the same risk of death as nonsmokers.
Heart beat: Trial questions beta blockers for all before noncardiac surgery
The recommendation to take beta blockers before noncardiac surgery is now tempered by the information that the drug should be started a few days prior to surgery.
Heart beat: DASH diet ignored
While the DASH diet helps people lower their blood pressure and reduce the risk of heart disease, fewer people are following it.
Brief updates on a warfarin information booklet, a possible link between loop diuretics and bone loss, and a drug for peripheral artery disease that may also help prevent strokes.
Ask the doctor: How long do I need to keep taking Plavix?
I need to catheterize myself. I had stents put in my heart and started taking Plavix. I sometimes see a tinge of blood in the catheter bag, though lately the blood flow has been more substantial. I am 88 years old. How long will I need to take Plavix?
Ask the doctor: Can I have my hernia fixed while taking Plavix?
I had stents placed. My doctor told me to take Plavix and aspirin indefinitely. Now I need a hernia repair. My surgeon said to stop taking the drugs before the operation. My cardiologist says that would increase my chance of a clot in one of the stents.
Ask the doctor: Are community heart check-ups worth doing?
I often get mail from companies like Life Line Screening about having tests to look for "hidden" heart risks. The events are usually held at a local church and cost about $130. Are these tests valid? Are they worth the money?
Ask the doctor: Is sotalol making me tired and heavier?
My doctor put me on a calcium-channel blocker, but after I had angioplasty and got a stent, my doctor switched me to sotalol. Now I feel tired all the time and have gained weight, even though I feel like I'm eating less. Can this be from the sotalol?
Checking blood pressure: Do try this at home
Regular home blood pressure monitoring, for people with hypertension or those who are at risk for it, is a recommended health practice that can help keep blood pressure under control.
Aches and pains - is your statin to blame?
About ten percent of people who start taking a statin experience muscle pain. Usually this will go away on its own, or by adjusting the dosage of the medication or switching to a different one.
Get the lead out
The leads of implanted cardiac devices can break or become infected over time. If this happens, the leads must be replaced. A defective lead can be left in the heart, but it is considered safer to have it removed.
Mediterranean diet sails well in the USA
Long-term research finds that following a Mediterranean diet, which emphasizes vegetables, fruits, whole grains, and fish, can reduce the risk of heart disease, benefit heart attack survivors, and help with weight loss.
Heart beat: Post-heart attack angina common, and commonly untreated
A study found that a year after a heart attack, about 20% of people were still suffering from angina. A program of cardiac rehabilitation can strengthen the heart and help eliminate angina's pain.
Heart beat: Heart disease a major killer among people with HIV/AIDS
People living with HIV are more susceptible to heart disease, so it is important for them to stay healthy and fit, in order to ward off risk factors like high triglycerides, low HDL cholesterol, and hypertension.
Brief updates on controlling heart rate with medication, quality of heart attack care at night and on weekends, and vitamin D's importance to the heart and arteries.
Ask the doctor: What's the difference between blood sugar and hemoglobin A1c?
In your article on blood sugar control, you kept talking about hemoglobin A1c. I measure my blood sugar all the time, but my meter doesn't have a setting for a percentage reading. Is there a simple connection between blood sugar and hemoglobin A1c?
Ask the doctor: What's the connection between statins and coenzyme Q10?
Why don't you ever tell your readers that everyone who takes a statin to lower cholesterol should be taking coenzyme Q10, too?
Dial 911 when a heart attack has your number
Starting treatment quickly provides the best chance of surviving a heart attack. If you think you are having a heart attack, you should call 911 rather than have someone take you to the hospital, because paramedics can start treatment on the way there.
Alcohol's cardiac effects differ by sex
Women who drink too much alcohol risk cardiovascular problems and an increased chance of breast cancer, but in moderation (no more than one drink per day) alcohol is associated with a lower risk of heart disease and stroke.
The ankle-brachial index is a comparative blood pressure test, taken at the arms and ankles, that is done to check for the presence of peripheral artery disease.
Smaller surgery speeds recovery from valve fix
A minimally invasive version of heart surgery to replace or repair a mitral or aortic valve has shown excellent results and has a shorter recovery time on average, making it an option that older people should consider.
Heart beat: Coffee: A connection to good health?
Coffee lovers should be reassured by a study showing that death rates among coffee drinkers were no higher than for people who did not drink coffee.
Heart beat: CT scans may interfere with pacemakers, other devices
People with a pacemaker or ICD know they need to avoid MRIs. The FDA says that the CT scans may also interfere with these devices, but that they can be safely shut off for the duration of the scan.
Heart beat: Tapping the power of potassium
Potassium helps fight high blood pressure, but most Americans do not get enough potassium from their diet. Eating more fruits, vegetables, beans, and certain other foods boosts potassium levels.
Brief updates on the heart benefits of obesity surgery, a heart medication's effect on bone health, and taking statins prior to heart surgery.
Ask the doctor: Is my blood pressure medicine changing my ability to taste?
My sense of taste isn't as good as it was a few months ago. I started taking Capoten on top of the diuretic I have been taking for some time to control my blood pressure. Could the new drug be affecting my sense of taste? If it is, what can I do about it?
Ask the doctor: How is a blocked stent fixed?
What happens when a stent gets clogged up? Someone told me that a new one gets put over the plugged-up one, but that doesn't sound right.
Ask the doctor: How do I know if my new valve isn't working correctly?
I had a mechanical valve put in to replace a stiff aortic valve. But it hasn't made a big difference in how I feel. I still get short of breath when I try to walk fast. Could it be the wrong size, or not working properly?
Sleep apnea wakes up heart disease
Sleep apnea causes increased production of stress hormones, faster heart rate, increased blood pressure, and inflammation. Research has found that people with the condition are more likely to die of cardiovascular disease.
Flap over tilapia sends the wrong message
Tilapia has been criticized because it is lower in omega-3 fats and higher in omega-6 fats than other kinds of fish, but it is still a good source of protein and has other nutritional value.
No-surgery valve repair puts excitement to the test
Progress is being made on developing less invasive surgeries to replace or repair heart valves, but more research is needed before they become a viable option for the average person.
Living with long QT syndrome
Long QT syndrome is a lengthening of part of the heart's normal rhythm that occurs when its muscle cells do not properly process certain substances. The resulting erratic heart rhythm can cause fainting, shortness of breath, and possibly death.
Slow down and savor the flavor
Eating meals more slowly allows the stomach time to signal the brain when it is getting full, which can result in lower overall food consumption. Drinking water with your meals can also help by making you feel fuller.
Heart Beat: Drugs, angioplasty nearly equal for angina relief
A clinical trial that compared angioplasty with aggressive drug therapy for treatment of angina found both treatments about equally effective.
Heart Beat: Uncertainty dogs Zetia and Vytorin
The cholesterol-lowering drug ezetimibe, sold as Zetia and in combination with the statin Zocor as Vytorin, has proved no better at reducing plaque than the statin alone, and may be linked to an increased risk of cancer.
Brief updates on a drug combination that may cause muscle damage, a blood test for rejection after a heart transplant, a possible link between retinopathy and heart disease, and running for heart health and longevity.
Ask the doctor: Is low blood pressure a problem?
You are always talking about high blood pressure. Mine is always on the low side, about 80/60. Is that a problem?
Ask the doctor: Are there different kinds of heart failure?
Several years ago, a friend in my sewing circle was diagnosed with congestive heart failure. My doctor just told me I have heart failure. Are these the same condition or different ones?
Resistant hypertension needs special attention
Blood pressure that stays high even when three or more medications are taken is called resistant hypertension. In such cases lifestyle changes are especially important, and there may be underlying causes such as sleep apnea.
Wrist artery a safe approach to the heart
Most angioplasty procedures are performed through the femoral artery in the groin, but the radial artery in the wrist is also a viable access point, and may be slightly safer for some patients.
Beats per minute a signal of heart health
A resting heart rate above 100 beats per minute may be an indicator of more serious conditions such as atherosclerosis. Making an effort to exercise and reduce stress can help slow the heart to a healthier rate.
Folic acid: Too much of a good thing?
Because some foods are now fortified with folic acid, people who take multivitamins may be getting too much of it. This can block the body's ability to process folate, the natural form of folic acid, which in turn may be linked to heart disease.
Pre-dental antibiotics for few, not many
In a reversal of its previous advice, the American College of Cardiology says that most people with heart disease do not need to take antibiotics before having dental work done, but people in certain categories still need the medication.
Heart Beat: New COPD medications seem okay for the heart
There is some concern that drugs used to keep airways open in people with COPD may increase the risk of heart disease, but the testing done so far suggests that the medications are safe.
Heart Beat: Green tea and statins
People who take a statin may want to watch their intake of green tea, as there is a possibility it may boost the blood concentration of the medication to pain-causing levels.
Heart Beat: Hot flashes and the heart
Postmenopausal women who continue to experience hot flashes may be at increased risk of having high blood pressure, high cholesterol, or clogged arteries.
Brief updates on niacin's beneficial effect on HDL, exercise's aid in preventing atrial fibrillation, and an FDA web site with information about guidelines for drug ads.
Ask the doctor: Is it dangerous to have calcium in the aorta?
A test showed that I have calcium in my aorta. My doctor said it isn't serious, and that, as a 79-year-old, I will have to "live with it." Can you tell me more about this condition?
Ask the doctor: Do calcium supplements counteract calcium-channel blockers?
My doctor started me on a calcium-channel blocker for high blood pressure. I also take a daily calcium supplement for my bones. Will that counteract the drug's effect?
Ask the doctor: Can you get a stent after bypass surgery and vice versa?
A friend told me that if you get a stent you can't have bypass surgery later on. Is that right? And what about the opposite - getting a stent after having bypass surgery?
Put some bite into heart disease prevention
Researchers are exploring how bacteria in the mouth might play a role in heart disease, though there is still no conclusive evidence that the two are linked.
Blood pressure drugs can boost blood sugar
Among the many types of blood pressure medications available, some have a tendency to increase blood sugar levels, but this does not necessarily lead to a higher risk of diabetes.
Fish: Friend or foe?
While toxins such as mercury and PCBs are present in seafood, the amounts are considered safe, and the health benefits of omega-3 fats are much more significant than any risk posed by the toxins.
No benefit for late angioplasty after a heart attack
Angioplasty to treat chest pain will be most effective if the procedure is done within the first 12 hours after onset. If you have had symptoms for longer, drug treatment is likely to be as effective as angioplasty.
Heart Beat: Dancing away from heart failure
A study comparing different forms of exercise for people with moderate heart failure found that ballroom dancing was as effective as a traditional exercise regimen, and also improved patients' quality of life.
Heart Beat: Too soon to sell a gene test for warfarin?
Determining the correct dosage of warfarin for a heart patient can take several weeks. A company is selling a test that it claims will shorten the process, but there is no evidence yet to support the claim.
Heart Beat: Is there an afterlife for pacemakers and defibrillators?
Pacemakers and other implanted cardiac devices can be removed after a person's death and recycled for patients who cannot afford them. If you wish to do this, you should have a medical directive stating so.
Ask the doctor: What are the symptoms of, and tests for, an enlarged heart?
How would I know if I had an enlarged heart?
Ask the doctor: What's the skinny on fat-free half-and-half?
Is fat-free half-and-half good for you?
New drug fizzles at raising HDL
With the failure of torcetrapib, a drug that its maker hoped would raise HDL cholesterol, people seeking to lower their heart disease risk should rely on traditional strategies: exercise, diet, weight control.
Some blood pressure drugs act as a skeleton key
In addition to their known benefits as drugs that lower blood pressure, research suggests that thiazide diuretics, beta blockers, and ACE inhibitors may also help protect and strengthen bones.
Late blood clots tarnish drug-coated stents
After several years on the market, there is clear evidence that drug-coated stents pose a small but definite risk of causing blood clots. If you have chest pain, adopting a healthier lifestyle may be a safer alternative to having a stent implanted.
Heart Beat: Alcohol and high blood pressure
While people with high blood pressure are typically told to abstain from alcohol, a study suggests that moderate alcohol consumption may help prevent them from having a heart attack.
Heart Beat: Fatal attraction?
A new, stronger type of magnet used in some jewelry and clothing can interfere with the operation of a pacemaker or other implanted cardiac device if placed too close to it.
More Americans have their hypertension under control. Folic acid does not prevent heart attacks. Exercise after heart surgery is safe and beneficial. Program your cell phone with an emergency contact.
Ask the doctor: Do grapes and grape juice protect the heart like wine does?
For the health of my heart and arteries, how does regular consumption of red wine compare with grape juice or the equivalent in grapes?
Ask the doctor: Is it dangerous for me to go over my target heart rate?
My resting heart rate is on the high side, and it rises quickly when I exercise. I am afraid to go faster than 2 miles an hour on the treadmill, and I don't feel like I'm getting a real workout. Is it dangerous for me to go over my target heart rate?
Mixed marks for heart surgery report cards
A few states have begun to compile data on the success rates of cardiac surgeons, but the information may be outdated or otherwise inaccurate. One suggestion is to choose a doctor who regularly performs the surgery you need and has done it many times.
9 ways to protect your heart when diabetes threatens it
Most people with diabetes eventually develop some form of heart disease, but this is not inevitable. Focusing on improving health through diet, exercise, weight loss, and lowering blood pressure and cholesterol can help prevent heart disease.
Aldosteronism: Too much of a good thing
An excess of aldosterone, a hormone produced by the adrenal glands, causes hypertension in some people. Treatment depends on whether one gland or both is affected.
Statins for aortic valve narrowing?
Some research suggests that cholesterol-lowering statins may help prevent narrowing or hardening of the aortic valve, but there is not enough evidence to indicate you should start taking a statin if you are not taking one already.
Different shades of gray for post-heart attack depression
Depression can often develop as a result of a heart attack or cardiac surgery, and has more serious effects on heart health and overall health than depression that was present before a heart attack.
Heart Beat: Home defibrillator skills slip away
Research found that people who have a home defibrillator for use in case of a cardiac emergency tended to forget how to use the device over time. If you have such a device, it is vital to know how to use it and to maintain this knowledge.
Heart Beat: Parkinson's drugs linked to heart valve trouble
Two drugs used to treat Parkinson's disease have been found to cause heart valve leakage. If you take one of these medications, you should ask your doctor about switching to a safer one. If no other drug is effective, watch for signs of valve trouble.
Heart Beat: New Start! for exercise
The American Heart Association has launched a web site to help people track their eating habits and exercise, and offers tips and encouragement toward living a healthier life.
Heart Beat: Tea with a twist
If you drink tea, taking it with milk seems to negate any positive effect from the antioxidants it contains. However, there is still no definite evidence that tea can protect you from heart disease.
Ask the doctor: Is a lot of exercise bad for the heart?
I know that exercise is good for my heart, which is one reason why I took up long-distance running. But I have heard that marathon running damages the heart. Is that true?
Ask the doctor: Does it matter when I take a statin?
My doctor put me on a statin and told me to take it after dinner. I would rather take it with breakfast. Does it matter?
Heart scans hold intermediate promise
CT scans can detect calcium in arteries, but the presence of calcium does not automatically indicate the presence of heart disease. The test may still be of some benefit to those with an intermediate risk but without symptoms.
A new way to control blood pressure
The FDA has approved a new blood pressure drug that works by inhibiting the production of renin, a substance made by the kidneys that is the first step in the body's system of regulating blood pressure.
New tool refines heart risk prediction
The Framingham heart disease risk-assessment tool has been refined and improved with the addition of several new risk factors that contribute to the overall score and make it a more accurate predictor of heart disease risk.
What the latest diet trial really means
The Atkins diet helped women lose weight more quickly compared to other diets, but long-term eating strategies that match food intake to calories burned are the most effective way to maintain a healthy weight.
Heart beat: As the hammock swings
A Greek study found that taking a nap may decrease the risk of heart disease, but the results may be due to other factors, such as a lower-stress lifestyle, so they cannot be interpreted as cause and effect.
Heart beat: States of the heart
A survey by the federal government provides data for a visual representation showing the incidence of heart disease in the United States.
Heart beat: Study suggests limiting use of aspirin plus warfarin
The combination of aspirin and warfarin is prescribed to prevent clotting, but it does not have this effect in all heart disease patients, and can sometimes cause stomach bleeding.
Ask the doctor: Is weight lifting safe if I have a stent?
I am 58 and have had several stents implanted. I used to lift weights, but stopped after getting the stents. My blood pressure is good, and I take medications. I want to resume lifting but worry that it could cause a heart attack. Is that possible?
Ask the doctor: Is my breathlessness a heart or lung problem?
I had a quadruple bypass seven years ago. A few months back I found myself taking frequent short breaths when I climbed the stairs. Once I stopped exerting myself, my breathing soon returned to normal. Is this due to a problem with my heart or lungs?
Ask the doctor: Does a low ejection fraction doom me to inactivity?
At age 64 I had a severe heart attack that left me with a 20% ejection fraction. A recent echocardiogram showed that my heart is getting larger. I have no shortness of breath or swelling, but wonder how long I have before symptoms of heart failure appear?
Ask the doctor: Can eye drops for glaucoma affect the heart?
I was recently diagnosed with glaucoma. My eye doctor prescribed eye drops to reduce the pressure inside my eyes. After a short time I had to stop using them because they made me dizzy and my heartbeat felt strange. What else can I do for my eyes?
Yellow light on pain relievers
While there is a risk of adverse effects from any pain reliever, most people can take them safely. Use common sense, and have your blood pressure checked regularly if you are in a higher risk group due to heart disease.
Reception still fuzzy for fast CT scans of the heart
A new type of CT scan produces a clearer image of the heart than current methods, and without the need for an invasive catheter. Though there are some drawbacks to its use, the test may be helpful in emergency settings, when a quick diagnosis is needed.
COURAGE to make choices
A long-term study of treatment for stable coronary artery disease found that angioplasty was no better than the combination of medication and lifestyle changes at preventing future heart disease or prolonging life.
Heart Beat: Big bend for blood pressure?
A study reports that if the vertebra that supports the skull is misaligned, careful manipulation of it may result in a significant drop in blood pressure.
Heart Beat: High pulse pressure poses risk for atrial fibrillation
Pulse pressure is the difference between the high and low blood pressure measurements. A high pulse pressure (larger than 40) may lead to the development of atrial fibrillation, an irregular heart rhythm.
Generic versions of the blood-pressure drug Norvasc will save consumers money. A high-fiber diet may lower C-reactive protein, which contributes to inflammation of arteries.
Heart Beat: A heartfelt legacy from long-lived parents
According to the long-term Framingham Heart Study, having parents who live to age 85 or beyond may offer a greater degree of protection from heart disease and stroke.
Heart Beat: More evidence against trans fats
Researchers measured the amount of trans fat in blood cells and found that those with higher levels in their blood had a higher risk of having a heart attack or dying of heart disease.
New guidelines for CPR say to do chest compressions only and skip mouth-to-mouth breaths. Scientists are still looking for ways to boost HDL cholesterol. Periodontal treatment may be beneficial to the heart and arteries.
Ask the doctor: Can I stop taking my blood pressure medicine?
After taking atenolol for years, my doctor suggested cutting the dose. Then my cardiologist suggested that I stop taking it altogether. On the first day I didn't take it I felt "buzzed." My pressure began to creep upward. Did I bail out too quickly?
Ask the doctor: Should I take nitroglycerin during exercise?
I have a prescription for nitroglycerin, but I rarely need it. Every once in a while on the treadmill, I feel my chest tighten up. I take a pill, then continue exercising. I read a column by a doctor who said this is "ludicrous." What do you think?
What triggers heart attacks?
People who are at risk for heart disease may be fine for some time, until a stressful event or situation acts as a trigger for a heart attack or stroke. Even in those with heart disease, some of these triggers can be minimized or avoided.
Guidelines offer women a change of heart
Women are at as much risk for heart disease as men are. The American Heart Association has compiled a list of guidelines that offer a number of ways women can reduce their risk.
A road map to life in the fat lane
The body's process of turning fat into artery-blocking plaque is described and illustrated.
Heart Beat: Drive-through angioplasty?
Several European studies have found that the majority of people who undergo nonemergency angioplasty do not have to remain in the hospital overnight, but further study is needed before this practice becomes common in the US.
Heart Beat: Applying cardiac advances saves lives
As the guidelines for treating heart disease are informed by evidence from medical studies, these treatments become more common and survival rates increase.
Heart Beat: Migraine, heart disease linked
A study found that men who suffered migraines were more likely to have heart disease, but there is no evidence that migraines cause heart trouble, and no evidence as to what the connection, if any, may be.
Ask the doctor: Is warfarin turning my toes purple?
I take warfarin. I have blood blisters on my arms and sporadically on my legs. Recently I developed purple toes on one foot. My cardiologist didn't seem concerned and wouldn't explain whether it was the warfarin. Can you shed some light on this?
Ask the doctor: What causes C-reactive protein levels to vary?
I had a high-sensitivity C-reactive protein test that was 38.6, which my doctor said was quite high. My cholesterol was fine. A heart scan and stress test were normal. When my doctor repeated the test, my hsCRP was 6.1. What can cause such variations?
Ask the doctor: Can you really prevent heart disease?
People know that they can "prevent" heart disease by not smoking, losing weight, exercising, watching cholesterol and blood pressure, and eating right, but they still get heart disease. Is it really possible to prevent heart disease, or just slow it down?
Ask the doctor: Why does my father feel wires poking him in the chest months after open-heart surgery?
My father had open-heart surgery 18 months ago. Fairly soon after the operation, he started having the feeling that wires are poking him in the chest. Is that possible? If so, is there a solution?
Outlook on diabetes drug less than rosy
The diabetes drug Avandia may increase the risk of heart attack in those taking it. Other medications are as effective at lowering blood sugar without Avandia's risks.
Aspirin: A user's guide to who needs it and how much to take
People at risk for heart attack or stroke will likely benefit from taking low-dose daily aspirin, but for some there are greater risks (such as ulcers or gastrointestinal bleeding) that outweigh aspirin's help.
Heart Beat: Longer workouts better for boosting good cholesterol
Exercise helps boost the body's production of HDL cholesterol, but the amount of the increase can vary. The longer the workout, the more the body's HDL cholesterol level is likely to be raised.
Heart Beat: Pacemakers, iPods out of sync
An iPod or other digital music player held too close to the chest of a person with a pacemaker can interfere with the heart device's function. Cell phones can also cause this interference.
Heart Beat: Talking it up: speech and atrial fibrillation
Cardiologists found a case where too much talking was a trigger for atrial fibrillation.
Eating soy nuts may lower blood pressure slightly. Allegedly natural male enhancement products were found to contain substances almost identical to ED medications, which could be dangerous to men taking a nitrate medication for chest pain.
Correction and Clarification
The medication Actiq was incorrectly identified in a previous article as a treatment for pain related to heart disease. As it is a narcotic, it should only be used in special cases.
Ask the doctor: Are Lipitor and Crestor equally good for me?
I have been taking Crestor (10 mg) for several years. Now my insurance company tells me it is dropping Crestor from its preferred drug list and suggests I replace it with Lipitor. Will this be okay?
Ask the doctor: Do beta blockers and ACE inhibitors help or harm the heart?
Is my long-term use of beta blockers and ACE inhibitors setting me up for heart failure? I understand these drugs keep my heart rate low. If the heart is a muscle, and muscles are strengthened by exercise, won't slowing the heart weaken it?
Shake the salt habit for a longer life
Salt intake affects blood pressure, and can increase the risk of heart disease. Most of the salt people eat is added to foods during processing, so it is relatively easy to reduce salt consumption by choosing foods more carefully.
When high cholesterol is a family affair
A form of high cholesterol that is inherited, called familial hypercholesterolemia, can cause LDL levels of 200 or higher. Those who have it are at high risk for heart disease.
Skipping a beat - the surprise of palpitations
There are many possible causes of heart palpitations, including smoking, stress, and some medications. Though they are typically not serious or life-threatening, it can be difficult to determine the underlying cause.
Heart Beat: Paying attention to potassium in heart failure
Some medications taken by people with heart disease to counteract water buildup in the body can remove too much potassium from the body, while others can leave too much behind.
Heart Beat: A square of chocolate keeps the doctor away?
A German study suggests that, because of the antioxidants it contains, eating a small amount of dark chocolate daily may lower blood pressure by a few points.
Heart Beat: Diabetes poses danger for the heart, body
Many people with diabetes suffer from one of the numerous potential complications of the disease, but following a proper diet, getting exercise, and paying attention to risk factors can help prevent further problems.
Heart Beat: Waist watching
An increasing waistline in middle age could be an indicator of metabolic syndrome, which is a group of risk factors that often leads to heart disease or diabetes.
Heart Beat: Warfarin trumps aspirin for stroke prevention in elderly
Researchers found that the blood thinner warfarin is more effective than aspirin at preventing strokes and blood clots, allaying concerns that it was too powerful to be taken safely by older people.
Splitting statin pills saves money without risk to cholesterol control. Treating or preventing heart disease lowers the risk of memory loss. Tests of coenzyme Q10's effectiveness are inconclusive.
Ask the doctor: Is red yeast rice good for lowering cholesterol?
I saw a newspaper column that said red yeast rice is a safer, more effective way to treat high cholesterol than statin drugs. Is that true?
Ask the doctor: Why is my blood pressure higher in one arm than the other?
I was admitted to the hospital with chest pain. The doctors found my blood pressure was much lower in my right arm than in my left. They rushed me for a CT scan, looking for a "tear." Fortunately there wasn't one. What was it they were worried about?
Scaling back on antibiotics
Medical organizations have long recommended taking antibiotics before certain procedures that could cause bacteria to infect the heart, but new guidelines state that only people with certain heart conditions or diseases need take the drugs.
Driving under the influence of abnormal heart rhythms
The guidelines for how long to wait before driving after having an ICD implanted have been revised, to reflect the growing number of people who receive the device preventively.
Add intervals of intensity for a stronger heart
Exercise that includes interval training, where periods of slight or moderate exertion are alternated with short bursts of more intense exertion, can strengthen the heart and blood vessels.
Sticking it to blood pressure?
The belief that acupuncture can lower blood pressure has been tested by studies, but the conclusions regarding its effectiveness are divided.
Heart Beat: Nothing fancy
Death rates from coronary artery disease have been falling since the 1980s, due to emphasis on the need to combat the problem by adopting healthier habits.
Heart Beat: Beans, beans, the magical fruit
Besides the nutritional benefits, eating beans regularly can lower cholesterol and reduce the risk of a heart attack.
Heart Beat: The Mysterious Human Heart
A special program series on PBS will explore the workings of the human heart.
Ask the doctor: Does Fosamax cause atrial fibrillation?
I am 86 years old and have been taking Fosamax to strengthen my bones for nearly 10 years. A few months ago, I suddenly fainted and was later diagnosed with atrial fibrillation. Did that happen because I was taking Fosamax?
Ask the doctor: Is it safe to fly with heart failure?
I was just diagnosed with heart failure. My husband and I like to travel. Is it okay for people like me to fly?
Come back to the garden of eatin'
The foundation of good nutrition, eating the right foods in the right quantities, provides clear benefits for the heart, but exercise and weight control are just as important to good heart health as diet.
No denying the power of produce
The multiple nutrients that occur naturally in fruits and vegetables are beneficial to the heart and the rest of the body in numerous ways, and thus should be a part of everyone's diet.
In with the good, out with the bad
Our bodies need protein, carbohydrates, and fat, but some kinds are better for us than others. It's important to eat the right kinds and quantities of these components in order to receive the most benefit from them.
Translating good food into better diets
Several diets with roots in medical studies have the potential to provide some protection from heart disease and to lower cholesterol and blood pressure.
12 tips for holiday eating
These suggestions can help you negotiate the excesses of the holiday season, so you can enjoy yourself without overindulging.
Ask the doctor: Is canned fish good for the heart?
I know that eating fish is good for the heart. But fresh fish costs a lot and I can't get to the grocery store very often. Does eating canned fish help?
Ask the doctor: How many calories do I need?
How can I figure out how many calories I should take in every day?
Genetic help for a blood-thinner balancing act
The FDA is recommending a genetic test for people prescribed warfarin, to search for variants of certain genes that can affect the drug's effectiveness. But there is not enough evidence yet that the test makes using the drug safer.
Protecting the heart during noncardiac surgery
Guidelines from the American Heart Association and the American College of Cardiology offer advice on protecting the heart during noncardiac surgery.
A blood pressure problem that's isolated in name only
Isolated systolic hypertension, when the systolic blood pressure is above 140 while the diastolic pressure is below 90, is caused by stiffening of large arteries. Medication may be prescribed, but lifestyle changes will have more impact on overall health.
Heart Beat: Teachable moment
Research has established that heart disease tends to run in families, so if a family member has a heart attack or stroke, it should serve as a motivator for other family members to see their doctors.
Heart Beat: Steering clear of pacemaker infections
A small but growing number of people develop an infection after having a pacemaker or ICD implanted. Research has found that taking antibiotics before the procedure reduces the risk of infection.
Heart Beat: Newer bypass technique may be safer for women
A study found that women who had off-pump bypass surgery had much lower rates of heart attack, stroke, or death during the operation or shortly after, more so than for men.
Heart Beat: ACE, ARB duet questioned
ACE inhibitors and angiotensin-receptor blockers are both used to combat stress hormones, which can contribute to heart failure. Combining them brings additional risks that outweigh any possible benefits.
Heart Beat: Take a shot against heart disease
A study found that heart disease deaths peaked each year during flu season, because the flu can trigger a heart attack or stroke. Those with heart disease, and those at higher risk of getting it, should get a flu vaccine each year.
Heart Beat: Too few get the best therapy for an ailing heart
Cardiac rehabilitation programs have been shown to reduce deaths in the years following a heart attack or stent procedure, but not enough patients participate in the programs.
Ask the doctor: How do I handle conflicting advice about exercise?
Last year, I had an aortic dissection. My surgeon says not to do any cardio or resistance exercise and to keep my heart rate down. My cardiologist says I can do light cardio and resistance exercise but to watch my blood pressure. Whom should I believe?
Ask the doctor: Is yerba mate good for my heart?
Is it true that drinking yerba mate can lower blood pressure and cholesterol? | 1 | 7 |
HISTORY OF FLIGHT
On June 29, 2011, at 1523 mountain daylight time, a Cessna R182, N2344C, impacted an open field in Thornton, Colorado. The commercial pilot, the sole person on board the airplane, was fatally injured. A post impact fire ensued and the airplane was substantially damaged. The airplane was registered to Julair, LLC, doing business as All American Aerials, Incorporated, and operated by the pilot under the provisions of 14 Code of Federal Regulations Part 91 as a business flight. Visual meteorological conditions prevailed for the local flight which was being operated without a flight plan. The flight departed Front Range Airport (FTG), Watkins, Colorado, approximately 1425.
The pilot's wife said she spoke with him by telephone just before he took off. She said that he told her that he was going to go up and "shoot a couple of thousand pictures." She said that he voiced no concerns about the weather or how his airplane was performing.
Approach control radar recorded a track depicting a Visual Flight Rules 1200 code at the time and in the area where the airplane would have been. The radar track showed the airplane come out of FTG (elevation 5,516 feet), fly up to the Thornton area, and begin a series of turns. The airplane was operating at an altitude between 5,800 to 6,300 feet mean sea level (msl) and a groundspeed of approximately 110 knots.
A review of radar information for the last 8 minutes of the flight, showed the airplane maneuvering just south of the E-470 toll way 2.23 miles northeast of the accident site at an altitude of 6,000 feet msl. The airplane made several orbits around the area of East 138th Court and Boston Street. At 1516:03, the airplane turned west to a heading of approximately 260 degrees. The airplane continued west at an approximate groundspeed of 112 knots until 1517:58, when the airplane made a left turn to the south. The airplane continued south on an approximate heading of 170 degrees for two and a half minutes until reaching 104th Avenue. The airplane turned northeast on an approximate 045 degree heading and continued northeast until 1521:03. The airplane then turned north and flew just east of Quebec Street at an altitude of 5,500 feet msl and a groundspeed of 94 knots until reaching 123rd Avenue. The airplane then made a left turn to the south. At 1521:54, the airplane disappeared from radar. The airplane’s last recorded altitude was 5,300 feet.
Witnesses said the airplane was maneuvering over the Thornton area at a low altitude at the same time that high wind suddenly occurred on the surface. One witness said he saw the airplane’s wings “dipping” up and down, and the airplane suddenly banked steeply to the left before impacting the ground. Several witnesses said that after the airplane impacted the ground, it exploded and the fire started.
The pilot, age 41, held a commercial pilot certificate with single- and multi-engine land and instrument airplane ratings. On renewal of his pilot insurance policy on December 6, 2010, the pilot reported a total flying time of 18,000 hours, with 8,200 hours in the Cessna 182. The policy renewal indicated the pilot successfully completed a flight review on July 5, 2011. The pilot's logbooks were never recovered and were suspected destroyed in the airplane.
The flight instructor who gave the pilot his last flight review said that the pilot was a step above other pilots to whom he gave flight reviews. He said that the day the pilot came to him for his flight review, the pilot told him that this was a checkride for him and he wanted to do everything that was in the Practical Test Standards for a private pilot. The pilot performed departure stalls, traffic pattern stalls, slow flight, turns around a point, and patterns and landings. The flight instructor said the pilot showed good knowledge and, although he was not sure, thought he had some professional flight training.
Federal Aviation Administration pilot medical records indicated the pilot completed a class 2 physical in April, 2010.
A few days prior to the accident, the pilot spoke to another pilot that was based at Front Range Airport. The pilot told him that he was taking photographs of residential and commercial real estate from his airplane with a digital camera. The pilot told him that he had business in Colorado and had been in the area for about a week. The pilot told him how he flew the airplane and took photographs out of the pilot window at the same time. The pilot told him he had been doing it for some time and was pretty good at it. The pilot also told him of a time when while he was taking pictures, his airplane struck a guy wire. The pilot told him that it hit the wing just outside of the strut, but he was able to fly his airplane back and land it without incident.
The pilot’s wife spoke to the pilot by cellular telephone approximately 10 minutes before the pilot took off. She said that he was in good spirits and did not indicate that he was concerned with the weather conditions or the airplane’s capabilities. She also said that he was in good health.
The airplane was a 1978 Cessna model R182. Airframe and engine logbooks were not recovered and were suspected destroyed in the airplane.
A review of work orders reflecting maintenance performed by a repair station at the pilot’s home airport in Marshfield, Wisconsin, dating back to May 2008, showed that an annual inspection was performed in April 2010. At the annual inspection, the airframe had 10,091.4 total hours. Minor maintenance was performed on the airplane by the repair station in June, September, and October 2010, and February and March 2011. The last work order, dated March 28, 2011, indicated the repair station cleaned, greased, and cycled the landing gear system and adjusted the rigging on the right nose landing gear door.
At 1534, the aviation routine weather report for Denver International Airport (DEN), 12 nautical miles east-southeast of the accident site was winds 190 at 15 knots gusting to 21 knots, visibility 10 miles, thunderstorm, scattered clouds at 8,000 feet msl, broken ceilings at 13,000 and 20,000 feet msl, temperature 32 degrees Celsius, dew point 1 degree Celsius, altimeter 29.99 inches, remarks; thunderstorm beginning 1532, rain beginning 1516 ending 1525, occasional lightning in the vicinity south, thunderstorm in the vicinity south moving northeast, hourly precipitation amount zero inches.
The closest Terminal Aerodrome Forecast (TAF) reporting location to the accident site was Rocky Mountain Regional Airport (BJC). The TAF obtained for the accident time was issued at 1435 and was valid for a 21-hour period beginning at 1500. The TAF forecast for BJC expected wind from 350 degrees at 9 knots, visibility greater than 6 miles, scattered cumulonimbus clouds at 8,000 feet agl, and a broken ceiling at 15,000 feet. Thunderstorms were expected in the vicinity after 1600, with a temporary variable wind at 20 knots gusting to 35 knots, thunderstorm, and light rain, with a ceiling broken at 8,000 feet in cumulonimbus clouds.
At 1038, the National Weather Service (NWS) Forecast Office in Boulder, Colorado, issued a Hazardous Weather Outlook for central and eastern Colorado, which discussed a better chance for showers and thunderstorms developing during the afternoon with the main threat from these showers and thunderstorms being gusts to 50 miles per hour.
At 1225, the NWS Forecast Office in Boulder, Colorado, issued an Area Forecast Discussion for eastern Colorado, which discussed high based convection expected to develop into the afternoon with gusts to 35 knots likely in and near any showers or thunderstorms. Higher gusts were possible based on dry adiabatic mixing and these stronger gusts could cause landing and takeoff delays.
The Denver Center Weather Service Unit issued a Meteorological Impact Statement, valid at the time of the accident for the Denver Air Route Traffic Control Center (ZDV) area, which advised that the low-level wind shear and microburst potential between 1300 and 1800 was moderate to high.
The pilot received takeoff clearance from Front Range tower prior to his departure. He confirmed the clearance and his intent to depart to the north. No further communications occurred between the pilot and any air traffic controlling agency.
A review of Flight Service Station records indicated the pilot did not contact them for any services.
WRECKAGE AND IMPACT INFORMATION
The airplane impacted in a rolling prairie grass field and came to rest inverted next to a horse pen approximately 330 feet northwest of a house. The elevation of the terrain in the area was approximately 4,800 feet msl.
The airplane wreckage path was along a common heading of 090 degrees magnetic. The wreckage encompassed an area defined by an initial impact point extending 112 feet to where the airplane main wreckage came to rest.
The first impact was evidenced by a 30-inch long scrape running parallel to the wreckage path followed by a spray of dirt that extended east for approximately 15 feet. In this area were several white colored paint chips.
A second point of impact was located 43 feet east of the initial impact mark. It consisted of an 18-inch-wide, 12-inch-deep smooth strike in the ground, which produced a hole and dislodged a large piece of dirt that came to rest 2 feet in front of the strike. The east side of the hole was smooth and showed gray paint transfer. At the right end of the smooth side of the hole were two parallel white stripes, which equated to the white stripes at the airplane’s propeller blade tips.
In the immediate vicinity of the hole were large pieces of broken clear Plexiglas. The pieces were clean except for some dirt spray. Also in this area was the airplane’s magnetic compass, pieces of the upper engine cowling, broken pieces of the forward windscreen support posts, white colored paint chips, map pages, and personal items.
Approximately five feet left and two feet aft of the hole was the airplane’s right wing tip. It was broken longitudinally along the attachment rivets. The position light had been broken out.
From the second impact point extending east for approximately 39 feet was an area of debris which contained more pieces of clear Plexiglas, pieces of the fuselage, pieces of door post, and pieces of paper. At the end of the debris area was the right window frame. It was broken out of the door. The Plexiglas was gone, and it had sustained charring from the fire. Just east of the window frame was the airplane’s right cabin door. It was broken out at the hinges, was bent aft and buckled outward, and was charred. The door handle was in the closed and locked position and the locking pin was extended.
The airplane main wreckage consisted of the majority of the airplane’s remaining structure. The fuselage remains were oriented on a south-southwesterly heading.
The cowling, cabin, baggage compartment and aft fuselage to just forward of the empennage were consumed by fire. The left wing with exception of the forward spar was consumed by fire. The inboard portion of the right wing to include the fuel tank and flap were consumed by fire. The right wing outboard of the flap to include the right aileron was charred, melted and partially consumed. The main landing gear was charred. The wheels and tires were consumed by fire.
Flight control continuity was confirmed from the aileron actuators to the remains of the mixer bar and control yokes.
The airplane’s empennage was inverted and resting on the top of the vertical stabilizer and the tip of left horizontal stabilizer. The horizontal stabilizers and elevator showed heat damage, partial melting, and paint blistering. The left horizontal stabilizer was bent upward approximately 10 degrees at mid span. The vertical stabilizer and rudder also showed heat damage and paint blistering.
Flight control continuity was confirmed from the elevator and rudder to the remains of the rudder pedals and control yokes.
The airplane engine was resting inverted on the upper cowling forward of the consumed cabin area. The firewall and engine mounts were crushed downward and bent aft. The engine was intact and showed heat damage from fire, especially the aft section where the dual magnetos, oil filter, fuel pump, and vacuum pump were installed. The crankshaft was partially fractured just aft of the flange. The propeller hub was intact. Both propeller blades were broken in their mounts and fractured approximately 10 inches outboard of the hub. The hub and blade remains showed heat damage and partial melting.
A 26-inch long section of propeller blade was located 18 feet south of the main wreckage. It was fractured laterally across the face of the blade, approximately mid span. The fracture was consistent with an overload failure. The blade section, which included the blade tip showed chordwise scratches and paint rubs consistent with a ground contact. The section was bent torsionally and showed several nicks in the leading edge.
The airplane wreckage was recovered and transported to a repair station and salvage facility for further examination.
A post-impact fire ensued at the time the airplane impacted the ground. The fire burned an area that extended west to east along the airplane’s crash path for approximately 70 feet, and north to south for approximately 72 feet. The fire continued until county fire fighters arrived on the scene and extinguished the fire.
MEDICAL AND PATHOLOGICAL INFORMATION
An autopsy was conducted by the Adams County Coroner on June 30, 2011. The Coroner concluded the pilot died from blunt force injuries sustained in the crash.
Results of toxicology testing of samples taken were negative for all tests conducted.
TESTS AND RESEARCH
The airplane engine, systems, and instrumentation were examined at Greeley, Colorado. The engine showed heavy impact and fire damage to the accessories, wiring harness, muffler, and exhaust manifold. The case and cylinders were intact. The accessories were removed and the crankshaft and camshaft were rotated from the accessory case. The crankshaft and camshaft rotated normally. All valves, rockers, and pushrods showed normal movement. Thumb compression was confirmed on all 6 cylinders.
An examination of the flap actuator indicated the flaps were at a position approximating 10 degrees.
The landing gear was retracted. The elevator trim actuator was found extended 1.4 inches, a position indicating nose-up trim.
Flight and engine instruments were charred, melted, and partially consumed by fire. The fuel selector indicator and valve confirmed that the selector was in the “both” position, indicating both wing tanks were supplying fuel to the fuel pump and carburetor.
The Cessna R182 Pilots Operating Handbook shows the minimum stall speed at a weight of 3,100 pounds, most forward center of gravity, zero degrees of flap deflection, and zero degree bank angle to be 42 knots indicated airspeed. With 20 degrees of flaps extended, the stall speed decreases to 30 knots. | 1 | 3 |
Data Sources and Methods: Air Health Indicator – Ozone and Fine Particulate Matter
Canadian communities for which ground-level ozone and fine particulate matter (PM2.5) concentrations were used for the national Air Quality Indicator of CESI were considered. Communities were included in the AHI only if they had a reasonably complete time series of pollution and weather measurements and enough daily mortality data.
For each community there were three types of data used for the AHI: daily numbers of cause-specific deaths, air pollution concentrations, and potential confounders to the mortality-air pollution association.
Daily numbers of cause-specific deaths
The daily numbers of cause-specific deaths (non-accidental mortality data) were obtained from the national mortality database (Vital Statistics Database–Death 2004) maintained by Statistics Canada. Based on the International Classification of Diseases (ICD), the mortality data included only deaths from internal causes (ICD-9 code < 800 and ICD-10 code A00-R00), excluding external causes such as injuries. Regarding cause-specific deaths, in particular, we were interested in cardiopulmonary mortality related to the circulatory or respiratory system. For this specification, our mortality data were categorized into a cardiopulmonary group (ICD-10 code between I20–I50 and J10–J67). The cardiopulmonary mortality data were extracted by Statistics Canada for a specified census division only where the census division of residence was the same as the census division of death occurrence.
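The two ICD-10 ranges above define the cardiopulmonary grouping mechanically, so membership can be checked from a code's letter and two-digit category. A minimal sketch (the function name and input handling are illustrative, not from the source):

```python
def is_cardiopulmonary(icd10):
    """True if an ICD-10 code falls in I20-I50 or J10-J67, the
    cardiopulmonary group described above. Only the letter and the
    two-digit category (e.g. 'I25' in 'I25.1') are compared."""
    letter, num = icd10[0].upper(), int(icd10[1:3])
    if letter == "I":
        return 20 <= num <= 50
    if letter == "J":
        return 10 <= num <= 67
    return False
```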
Air pollution concentrations
The daily ozone and PM2.5 (the latter measured by the tapered element oscillating microbalance method, or TEOM) concentration data were obtained from the National Air Pollution Surveillance (NAPS) Network operated by Environment Canada. Established in 1969, NAPS provides accurate and long-term air quality data of a uniform standard across Canada to monitor the quality of ambient (outdoor) air in populated regions, with specific procedures for the selection and positioning of monitoring stations. For each NAPS monitoring station, the daily average concentration for a given day was calculated only if at least 75% of the 24 hourly concentrations for that day (i.e. at least 18 hourly concentrations) were available; otherwise, it was recorded as missing. For each census division, the daily concentration was averaged over monitoring stations if there were 2 or more stations located in that census division. For the air pollution metric, the daily 8-hour maximum was selected for ozone and the daily mean for PM2.5.
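The completeness and averaging rules above can be sketched directly. The code below illustrates the stated rules and is not Environment Canada's actual processing code; in particular, the per-window completeness threshold inside the 8-hour maximum is an assumption, since the source states only the daily 75% rule.

```python
from statistics import mean

MIN_HOURS = 18  # 75% of 24 hourly concentrations

def daily_mean(hourly):
    """Daily mean (the PM2.5 metric); recorded as missing (None)
    if fewer than 18 of the 24 hourly values are available."""
    vals = [v for v in hourly if v is not None]
    return mean(vals) if len(vals) >= MIN_HOURS else None

def daily_8h_max(hourly):
    """Daily maximum of running 8-hour averages (the ozone metric).
    The 6-of-8 window completeness rule is an illustrative assumption."""
    vals = [v for v in hourly if v is not None]
    if len(vals) < MIN_HOURS:
        return None
    best = None
    for i in range(17):  # 8-hour windows wholly within the day
        w = [v for v in hourly[i:i + 8] if v is not None]
        if len(w) >= 6:
            avg = mean(w)
            if best is None or avg > best:
                best = avg
    return best

def census_division_value(per_station_dailies):
    """Average the daily value over all stations reporting in a division."""
    vals = [v for v in per_station_dailies if v is not None]
    return mean(vals) if vals else None
```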
Potential confounders to the mortality-air pollution association
As for potential confounding variables to the exposure-mortality association, three factors were considered: time; temperature; and indicators for days of the week. Calendar time is included to control both temporal and seasonal variations. Daily temperature controls for the short-term effect of weather on daily mortality; and day of the week accounts for mortality that varies by day of the week. Specifically, to account for the weather effect, daily mean temperature data were obtained from the National Climate Data and Information Archive of Environment Canada. As for lifestyle factors such as smoking or cholesterol in the community, they do not vary meaningfully from day to day and thus can be ignored as confounders.
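In daily time-series mortality studies, the confounders named above typically enter a Poisson log-linear regression alongside the pollutant term. The sketch below is one common formulation fitted by iteratively reweighted least squares; the simulated data, variable names, and model form are illustrative and not necessarily the exact AHI model.

```python
import numpy as np

def fit_poisson(X, y, iters=25):
    """Poisson log-linear regression, log E[y] = X @ beta,
    fitted by iteratively reweighted least squares (IRLS)."""
    # Start from an ordinary least-squares fit on the log scale.
    beta = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]
    for _ in range(iters):
        mu = np.exp(X @ beta)            # expected daily deaths
        z = X @ beta + (y - mu) / mu     # working response
        XtW = X.T * mu                   # Poisson working weights
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Simulated one-year daily series (all values illustrative).
rng = np.random.default_rng(0)
n = 365
t = np.arange(n)
temp = 10 + 15 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, n)
ozone = 30 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 5, n)
dow = t % 7

# Confounders named in the text: smooth functions of calendar time for
# temporal/seasonal variation, daily mean temperature, and day-of-week
# indicators -- plus the pollutant term whose coefficient is of interest.
X = np.column_stack(
    [np.ones(n),
     np.sin(2 * np.pi * t / 365), np.cos(2 * np.pi * t / 365),
     temp]
    + [(dow == d).astype(float) for d in range(1, 7)]
    + [ozone]
)
true_beta = np.array([3.0, 0.1, 0.05, -0.002] + [0.01] * 6 + [0.002])
y = rng.poisson(np.exp(X @ true_beta)).astype(float)
beta_hat = fit_poisson(X, y)
```

Because an intercept is included, the fitted expected counts reproduce the total observed deaths at convergence, which is a quick sanity check on the fit.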
Twenty Canadian communities (Saint John, Québec, Montréal, Ottawa, York, Toronto, Peel, Oakville, Hamilton, Niagara Falls, Kitchener, Windsor, Sarnia, Sault Ste. Marie, Winnipeg, Regina, Saskatoon, Calgary, Edmonton, and Vancouver) were selected for ozone. Eighteen communities (Saint John, Québec, Montréal, Ottawa, Toronto, Peel, Oakville, Hamilton, Niagara Falls, London, Windsor, Sarnia, Waterloo, Winnipeg, Regina, Calgary, Edmonton, and Vancouver) were selected for PM2.5.
Each community’s geographic boundaries were defined by the census division associated with the city.
Yearly data for the years 1990 to 2008 were used for ozone and yearly data for the years 2000 to 2008 were used for PM2.5.
Mortality data are difficult to obtain and lag the other data by a few years. Raw 2007 data are now available, but only the 2004 data were available in the correct format and detail for use with the AHI. Consequently, values for the years 2005 to 2008 were approximated from the average national annual risk (mortality data).
Open Source Operating Systems
Linux kernel based OS
Kubuntu Linux is a user-friendly operating system based on KDE, the K Desktop Environment. With a predictable 6-month release cycle and as part of the Ubuntu project, Kubuntu is the GNU/Linux distribution for everyone. An improved desktop, updated applications and increased usability features are just a few of the surprises with this latest release.
Ubuntu Linux brings together the best of free and open source software delivered on a stable, easy to use and learn platform. Ubuntu claims to be always free of charge, including enterprise releases and security updates. It comes with full commercial support from Canonical and hundreds of companies around the world. Ubuntu includes the very best translations and accessibility infrastructure that the free software community has to offer.
SUSE Linux is a worldwide community program sponsored by Novell that promotes the use of Linux everywhere. The program provides free and easy access to openSUSE. Here you can find and join a community of users and developers, who all have the same goal in mind — to create and distribute the world's most usable Linux. openSUSE also provides the base for Novell's award-winning SUSE Linux Enterprise products. The goals of the openSUSE project are to make openSUSE the easiest Linux distribution for anyone to obtain and the most widely used open source platform, and to provide an environment for open source collaboration that makes openSUSE the world's best Linux distribution for new and experienced Linux users.
Mandriva Linux is the best way to start using Linux. A full Linux operating system on a single CD for both new and experienced Linux users, it is fast to download and install, and also safe to try with a live mode. Requirements: any Intel or AMD processor, 1Ghz or better - dual-core supported, RAM - 256 MB minimum, 512 MB recommended.
Debian Linux is a free operating system (OS) for your computer. An operating system is the set of basic programs and utilities that make your computer run. Debian uses the Linux kernel (the core of an operating system), but most of the basic OS tools come from the GNU project; hence the name GNU/Linux. Debian will run on almost all personal computers, including most older models. Each new release of Debian generally supports a larger number of computer architectures. Almost all common hardware is supported.
Elive Linux is a complete operating system for your computer. It's the perfect choice for replacing your proprietary, high-cost system. It is built on top of Debian GNU/Linux and customized to meet your needs for a complete operating system while still offering the user eye-candy, with minimal hardware requirements. It will help turn your old computer into a high-powered workstation again, with an interface that dazzles everybody who sees it. Elive comes with a full suite of applications for your work and leisure time needs. It offers everything from a full office suite to games and multimedia. You can enjoy watching movies or listening to music. You can even make your own DVD movies with menus and music. Elive gives you the ability to make 3D animations, or compose your own movies in a real-time non-linear video editor from simple videos taken with your video camera. You can edit and manipulate audio and image files for better quality, effects, or design. Elive is a very stable system that will continue to run day after day without problems. The Enlightenment desktop is ultra-fast and perfectly stable with no random errors or surprises. There are no viruses, trojans, spyware, adware, or any similar things. Elive is a secure and serious system.
Austrumi is a business-card-size (50 MB) bootable Live CD Linux distribution. Imagine the ability to boot your favorite Linux distribution whether you are at home, at school or at work... Requirements: CPU - Intel-compatible (Pentium 2 or later); RAM - at least 128 MB (if 128 MB or less, then run boot: al nocdcache).
Ubuntu Studio is aimed at the GNU/Linux audio, video and graphic enthusiast as well as professional. It provides a suite of the best open-source applications available for multimedia creation. Completely free to use, modify and redistribute. Your only limitation is your imagination.
JAD Studio is a Linux operating system. It comes with a comprehensive collection of free, open source multimedia software for audio visual content creation. JackLab was created for music production and so it comes with a real-time kernel optimised for low-latency recording, the latest music software such as Ardour and Rosegarden and over 300 plug-ins and effects. JAD gives you the option of using the lightweight yet beautiful looking Enlightenment 17 desktop environment which uses much less RAM than KDE does. JAD 1.0 is based upon OpenSUSE 10.2 and hence packages (programs, also known as binaries) for OS 10.2 should work under JAD 1.0 and vice versa.
Fedora Linux is a Linux-based operating system that showcases the latest in free and open source software. Fedora is always free for anyone to use, modify, and distribute. It is built by people across the globe who work together as a community: the Fedora Project. The Fedora Project is open and anyone is welcome to join.
Sabayon Linux is an advanced, scalable and community driven Linux distribution. It tries to provide its users the best and most complete computing experience. Sabayon Linux uses Gentoo Linux as its base. It is (and always will be) 100% compatible with it.
Mepis Linux allows you to test and try the software before you install to your harddrive. It includes the very best business and multimedia programs, and features unique hardware detection and configuration superior to any others. SimplyMEPIS is pre-configured for simplicity and ease of use, you're productive in a matter of minutes, not hours.
Pardus is a very simple Turkish operating system. It comes with internet tools, an office suite, multimedia (picture, music, video, etc.) players, games and numerous applications on a single CD to answer the needs of desktop users.
Slackware Linux is a complete 32-bit multitasking "UNIX-like" system. It's currently based around the 2.6 Linux kernel series and the GNU C Library version 2.3.4 (libc6). It contains an easy to use installation program, extensive online documentation, and a menu-driven package system. A full installation gives you the X Window System, C/C++ development environments, Perl, networking utilities, a mail server, a news server, a web server, an ftp server, the GNU Image Manipulation Program, Netscape Communicator, plus many more programs. Slackware Linux can run on 486 systems all the way up to the latest x86 machines (but uses -mcpu=i686 optimization for best performance on i686-class machines like the P3, P4, and Duron/Athlon).
Dynebolic Linux is shaped around the needs of media activists, artists and creatives as a practical tool for multimedia production: you can manipulate and broadcast both sound and video with tools to record, edit, encode and stream, with most devices and peripherals recognized automatically: audio, video, TV, network cards, FireWire, USB and more. You can employ this operating system without the need to install anything, and if you want to run it from hard disk you just need to copy a directory. It is optimized to run on slower computers, turning them into full media stations: the minimum you need is a Pentium 1 or K5 PC with 64 MB RAM and an IDE CD-ROM, or a modded Xbox game console - and if you have more than one, you can easily build clusters.
DreamLinux is a modern and modular Linux system that can be run directly from the CD and optionally be easily installed onto your HD. Dreamlinux comes with a selection of the best applications designed to meet most of your daily needs. It is based on Debian and Morphix, which means it takes advantage of their best features and adds its own modern development tools.
Geexbox Linux is a free embedded Linux distribution which aims at turning your computer into a so-called HTPC (Home Theater PC) or Media Center. Being a standalone LiveCD-based distribution, it's a ready-to-boot operating system that works on any Pentium-class x86 computer or PowerPC Macintosh, with no software requirements. You can even use it on a diskless computer, the whole system being loaded in RAM. The distribution comes with complete and automatic hardware detection, not requiring any driver to be added. It supports playback of nearly any kind of audio/video and image files, and all known codecs and containers are shipped in, allowing them to be played from various physical media, whether CD, DVD, HDD, LAN or Internet. GeeXboX also comes with a complete toolchain that allows developers to easily add extra packages and features, and that can also be used to give birth to many dedicated embedded Linux systems.
XBox Linux can be a full desktop computer with mouse and keyboard, a web/email box connected to TV, a server or router or a node in a cluster. You can either dual-boot or use Linux only; in the latter case, you can replace both IDE devices. You can even connect the Xbox to a VGA monitor.
iQunix OS is a Linux operating system based on the popular Ubuntu distribution. Its unique design offers Ubuntu users and specialists a "bare-bone", GNOME-based operating system in which nothing is pre-installed.
BSD kernel based OS
Free BSD Unix is an advanced operating system for x86 compatible (including Pentium and Athlon), amd64 compatible (including Opteron, Athlon64, and EM64T), UltraSPARC, IA-64, PC-98 and ARM architectures. It is derived from BSD, the version of UNIX developed at the University of California, Berkeley. It is developed and maintained by a large team of individuals. FreeBSD offers advanced networking, performance, security and compatibility features today which are still missing in other operating systems, even some of the best commercial ones.
PC-BSD Unix is a free operating system designed with ease of use in mind. Like any modern system, you can listen to your favorite music, watch your movies, work with office documents and install your favorite applications with a setup wizard at a click. It offers the stability and security that only a BSD-based operating system can bring, while at the same time providing a comfortable user experience, allowing you to get the most out of your computing time. With PC-BSD you can spend less time working to fix viruses or spyware and instead have the computer work for you.
DesktopBSD Unix aims at being a stable and powerful operating system for desktop users. It combines the stability of FreeBSD, the usability and functionality of KDE and the simplicity of specially developed software to provide a system that's easy to use and install. The DesktopBSD Tools are a collection of applications designed to make life easier and more productive. Even inexperienced users can perform administrative tasks such as configuring wireless networks, accessing USB storage devices and installing and upgrading software.
GNU-Darwin Linux aims to be the most free software distribution. It focuses on projects that leverage the unique combination of Darwin and GNU, and help users to enjoy the benefits of software freedom.
Other types of OS
ReactOS is an advanced free open source operating system providing a ground-up implementation of a Microsoft Windows XP compatible operating system. ReactOS aims to achieve complete binary compatibility with both applications and device drivers meant for NT and XP operating systems, by using a similar architecture and providing a complete and equivalent public interface. ReactOS has incorporated, and will continue to incorporate, features from newer versions, sometimes even defining the state of the art in operating system technology.
Haiku is an open-source operating system currently in development designed from the ground up for desktop computing. Inspired by the BeOS, Haiku aims to provide users of all levels with a personal computing experience that is simple yet powerful, and free of any unnecessary complexities. Haiku is developed mostly by volunteers around the world in their spare time.
AROS Research Operating System is a lightweight, efficient and flexible desktop operating system, designed to help you make the most of your computer. It's an independent, portable and free project, aiming at being compatible with AmigaOS 3.1 at the API level (like Wine, unlike UAE), while improving on it in many areas. The source code is available under an open source license, which allows anyone to freely improve upon it.
Menuet is an Operating System in development for the PC written entirely in 32/64 bit assembly language, and released under the License. It supports 32/64 bit x86 assembly programming for smaller, faster and less resource hungry applications. Menuet has no roots within UNIX or the POSIX standards, nor is it based on any particular operating system. The design goal has been to remove the extra layers between different parts of an OS, which normally complicate programming and create bugs.
MINIX 3 is a new open-source operating system designed to be highly reliable, flexible, and secure. It is loosely based on previous versions of MINIX, but is fundamentally different in many key ways. MINIX 3 adds the new goal of being usable as a serious system on resource-limited and embedded computers and for applications requiring high reliability.
Syllable is a reliable and easy-to-use open source operating system for the home and small office user. Syllable is still being developed, but it is already relatively stable and mature, including the following features: booting usually takes less than ten seconds, a full GUI is built into the OS; support for a wide range of common hardware devices, including video, network, and sound cards from manufacturers such as Intel, AMD, 3Com, nVidia, and Creative; Internet access through an Ethernet network (PPP and PPPoE are not fully supported yet, but are available in a test version); a graphical web browser, based on WebKit; an e-mail client (Whisper), and hundreds of other native applications; a journalled file system, modelled on the BeOS file system; and more.
E/OS is an open source emulator of the BeOS, Darwin, DOS, Linux, and Windows APIs on top of X, allowing many unmodified programs to run on Linux, FreeBSD, Mac OS X, Solaris, Windows and DOS.
FreeVMS is an OpenVMS-like operating system which can run on several architectures like i386, PPC, Alpha, and many others. It consists of a POSIX kernel and a DCL command line interpreter. The only architectures currently supported are i386 and x86-64.
JNode is a simple to use & install Java operating system for personal use. It runs on modern devices.
BeleniX is an OpenSolaris Distribution with a Live CD (runs directly off the CD). It includes all the features of OpenSolaris and adds a whole variety of open source packages. It can be installed to harddisk as well. BeleniX is free to use, modify and distribute.
FreeDOS aims to be a complete, free, 100% MS-DOS compatible operating system. It is often used with emulators such as Bochs or Linux's dosemu, and it can also run standalone on PCs or embedded systems.
The GNU Hurd is the GNU project's replacement for the Unix kernel. The Hurd is a collection of servers that run on the Mach microkernel to implement file systems, network protocols, file access control, and other features that are implemented by the Unix kernel or similar kernels (such as Linux).
Plan 9 was born in the same lab where Unix began. Underneath, though, lies a new kind of system, organized around communication and naming rather than files and processes. In Plan 9, distributed computing is a central premise, not an evolutionary add-on. The system relies on a uniform protocol to refer to and communicate with objects, whether they be data or processes, and whether or not they live on the same machine or even similar machines. A single paradigm (writing to named places) unifies all kinds of control and interprocess signaling.
SharpOS is a community effort to write an operating system based on .NET technology, with a strong sense of security and manageability. | 1 | 9 |
Type: Main battle tank
Place of origin: USSR
In service: 1963 – present
Used by: Soviet Union, Belarus, Russia, Ukraine, Uzbekistan
Designer: Morozov Design Bureau
Designed: 1951 – 62
Produced: 1963 – 87
Weight: 38 tonnes (42 short tons; 37 long tons)
Length: 9.225 m (30 ft 3.2 in) (gun forward)
Width: 3.415 m (11 ft 2.4 in)
Height: 2.172 m (7 ft 1.5 in)
Armour: 20–450 mm (0.79–18 in) of glass-reinforced plastic sandwiched between layers of steel; ERA plates optional
Main armament: D-81T 125 mm smoothbore gun
Secondary armament: 7.62 mm PKMT coaxial machine gun, 12.7 mm NSVT anti-aircraft machine gun
Engine: 5DTF 5-cylinder diesel, 700 hp (522 kW)
Power/weight: 18.4 hp/tonne (13.7 kW/ton)
Operational range: 500 km (310 mi); 700 km (430 mi) with external tanks
Speed (road): 60.5 km/h (37.6 mph)
The T-64 is a Soviet main battle tank, introduced in the early 1960s. It was a more advanced counterpart to the T-62: the T-64 served tank divisions, while the T-62 supported infantry in motor rifle divisions. Although the T-62 and the famed T-72 would see much wider use and generally more development, it was the T-64 that formed the basis of more modern Soviet tank designs like the T-80.
The T-64 was conceived in Kharkiv, Ukraine as the next-generation main battle tank by Alexander A. Morozov, the designer of the T-54 (which in the meantime would be incrementally improved by Leonid N. Kartsev's Nizhny Tagil bureau, in models T-54A, T-54B, T-55, and T-55A).
A revolutionary feature of the T-64 is the incorporation of an automatic loader for its 125-mm gun, allowing a crewmember's position to be omitted, and helping to keep the size and weight of the tank down. Tank troopers would joke that the designers had finally caught up with their unofficial hymn, "Three Tankers"—the song had been written to commemorate the crewmen fighting in the Battle of Khalkhin Gol, in 3-man BT-5 tanks in 1939.
The T-64 also pioneered other Soviet tank technology: the T-64A model of 1967 introduced the 125-mm smoothbore gun, and the T-64B of 1976 would be able to fire a guided antitank missile through its gun barrel.
The T-64 design was further developed as the gas turbine-powered T-80 main battle tank. The turret of the T-64B would be used in the improved T-80U and T-80UD, and an advanced version of its diesel engine would power T-80UD and T-84 tanks built in Ukraine.
The T-64 would be used only by the Soviet Army and never exported, unlike the T-54/55. It was superior to these tanks in most qualitative terms, until the introduction of the T-72B model in 1985. The tank equipped elite and regular formations in Eastern Europe and elsewhere, the T-64A model being first deployed with East Germany's Group of Soviet Forces in Germany (GSFG) in 1976, and some time later in Hungary's Southern Group of Forces (SGF). By 1981 the improved T-64B began to be deployed in East Germany and later in Hungary. While it was believed that the T-64 was "only" reserved for elite units, it was also used by much lower-priority "non-ready formations", for example, the Odessa Military District's 14th Army.
With the break-up of the Soviet Union in 1991, T-64 tanks remained in the arsenals of constituent republics. Currently, slightly fewer than 2,000 of the old Soviet inventory of T-64 tanks are in service with the military of Ukraine and about 4,000 remain in service with the Russian Ground Forces.
Development history
The initial requirement
Recognizing that the T-55/T-62 lineage had finally exhausted its potential for improvement, the USSR embarked upon the development of an entirely new tank design that could defeat new Western tanks like the British Chieftain and resist new Western anti-tank weapons.
Project 430
Studies for the design of a new battle tank started as early as 1951. The KB-60M team was formed at the Kharkiv design bureau of the Kharkiv transport machine-building factory No. 75 named for Malyshev (Russian: конструкторское бюро Харьковского завода транспортного машиностроения №75 им. Малышева) by engineers returning from Nizhny Tagil, with A. A. Morozov at its head. A project named obyekt 430 gave birth to three prototypes, which were tested in Kubinka in 1958. Those vehicles showed characteristics that would radically change the design of battle tanks on the Soviet side of the Iron Curtain. For the first time, an extremely compact opposed-piston engine was used: the 4TD, designed by the plant's engine design team. The transmission system comprised two lateral gearboxes, one on each side of the engine. These two innovations yielded a very short engine compartment, with its access opening located beneath the turret; the engine compartment volume was almost half that of the T-54. The cooling system was of the extraction type, and a new lightweight suspension was fitted, featuring hollow metallic road wheels of small diameter and tracks with rubber-bushed joints.
The tank would keep a 100 mm D-10TS gun and 120 mm of frontal armour. As it did not present a clear superiority in combat characteristics over the T-55, which was then entering service, Morozov decided that starting production was not worthwhile given the project's drawbacks. However, studies conducted on the obyekt 430U, featuring a 122 mm gun and 160 mm of armour, demonstrated that the potential existed to fit the firepower and armour of a heavy tank on a medium tank chassis. A new project was consequently started: obyekt 432.
Project 432
The gun fitted to this new tank was the powerful 115 mm D-68 (2A21). Replacing the human loader with an electro-hydraulic automatic system was a potentially risky decision, since the technology was new to Soviet designers. The crew was reduced to three, which allowed an important reduction in internal volume, and consequently in weight, from 36 tonnes (obyekt 430) to 30.5 tonnes. The height dropped by 76 mm.
However, the arrival of the British 105 mm L7 gun and its US M68 variant, fitted respectively to the Centurion and M60 tanks, forced the team to undertake another bold first: the adoption of composite armour. The process, called the "K combination" by Western armies, consisted of an aluminium-alloy layer between two high-strength steel layers. As a consequence, the weight of the prototype eventually rose to 34 tonnes. But as the engine was now the locally designed 700 hp (515 kW) 5TDF, its mobility remained excellent, far superior to the T-62 then in service. The obyekt 432 was ready in September 1962 and production started in October 1963 at the Kharkiv plant. On December 30, 1966, it entered service as the T-64.
Even as the first T-64s were rolling off the assembly lines, the design team was working on a new version, named obyekt 434, which would allow it to maintain firepower superiority. The brand-new and very powerful 125 mm D-81T gun from the Perm weapons factory was fitted to the tank. This gun was essentially a scaled-up version of the 115 mm smoothbore cannon of the T-62. The larger 125 mm ammunition meant fewer rounds could be carried inside the T-64, and with a fourth crewman as loader taking up space as well, the tank would have had only a 25-round capacity. This was unacceptably low for the Soviet designers, but strict dimensional parameters forbade them from enlarging the tank to increase interior space. The solution was to replace the human loader with a mechanical autoloader, cutting the crew to three and marking the first use of autoloaders in a Soviet MBT. (Perrett 1987:42) The 6ETs10 autoloader holds 28 rounds and allows 8 shots per minute; the 2E23 stabiliser was coupled to the new TPD-2-1 (1G15-1) sight. Night driving was also improved with the new TPN-1-43A periscope, which benefited from the illumination of a powerful L2G infrared projector fitted on the left side of the gun. Protection was improved, with fibreglass replacing the aluminium alloy in the armour, and small spring-mounted plates were fitted along the mudguards (known as gill skirts) to cover the top of the suspension and the side tanks. They were, however, extremely fragile and were often removed. Some small storage spaces were created around the turret, with a compartment on the right and three boxes on the front left. Snorkels were mounted on the rear of the turret. An NBC protection system was fitted and the hatches were widened.
Prototypes were tested in 1966 and 1967 and, as production began after the six hundredth T-64, it entered service in the Soviet Army under the T-64A designation. Chief engineer Alexander Morozov was awarded the Lenin Prize for this model's success.
Designed for elite troops, the T-64A was constantly updated as improved equipment became available. After only three years in service, a first modernisation occurred, concerning:
- fire control, by replacing the sights with the TPD-2-49 day sight with an optical coincidence rangefinder and a TPN-1-49-23 night sight, and stabilisation by mounting a 2E26 system.
- the radio by mounting a R-123M
- night vision, with a TBN-4PA for the driver and a TNP-165A for the commander. The commander's station was revised by mounting a small stabilised turret with a 12.7×108 mm NSVT anti-aircraft machine gun, electrically controlled through an optical PZU-5 sight and fed with 300 rounds. It could be operated from within the tank, so the commander could avoid exposing himself (as on previous tanks). The possibility of mounting a KMT-6 mine-clearing system was also added.
A derived version, designed for commanding officers and named T-64AK, appeared at the same time. It carried an R-130M radio with a 10 m telescopic antenna, usable only when static as it required guy wires, a PAB-2AM artillery aiming circle and a TNA-3 navigation system, all of which could be powered by an auxiliary gasoline-fired generator.
In 1976, the weapon system was improved by mounting a D-81TM (2A46-1) gun, stabilised by a 2E28M2 and fed by a 6ETs10M autoloader. The night sight was replaced by a TNPA-65, and the engine could accept different fuels, including diesel, kerosene and gasoline. T-64A production, by then running alongside the newer B variant, stopped in 1980.
The majority of T-64As were nevertheless further modernised after 1981, receiving two banks of six 81 mm 902A smoke grenade launchers, one on each side of the gun, and longer-lived rubber side skirts in place of the gill plates. Some seem to have been fitted after 1985 with reactive armour bricks (as on the T-64BV), or with the TPD-K1 laser rangefinder (1981) in place of the TPD-2-49 optical coincidence rangefinder. Almost all original T-64s were modernised into the T-64R between 1977 and 1981 by reorganising external storage and snorkels along the lines of the T-64A.
The design team carried on its work on new versions. Problems occurred with series production of the 5TDF engine, as local manufacturing capacity proved insufficient to supply tank production spread across three factories (Malyshev in Kharkiv, Kirov in Leningrad and Uralvagonzavod in Nizhny Tagil).
From 1961, an alternative to the obyekt 432 was studied with the V-45 12-cylinder V engine: the obyekt 436. Three prototypes were tested in 1966 at the Chelyabinsk factory. The order to develop a model derived from the 434 with the same engine produced the obyekt 438, later renamed obyekt 439. Four tanks of this type were built and tested in 1969, showing the same mobility as the production version, but mass production was not started. They served, however, as the basis for the design of the T-72 engine compartment.
In the early 1970s, the design team was trying to improve the tank further. The T-64A-2M study of 1973, with its more powerful engine and reinforced turret, served as the basis for two projects:
- Obyekt 476 with a 6TD 1000 hp (735 kW) engine which served as a model for the T-80 combat compartment.
- Obyekt 447 which featured a new fire control with a laser telemeter, and which was able to fire missiles through the gun.
For the latter, the order was given to start production under the name T-64B, together with a derived version (sharing 95% of its components), the obyekt 437, which omitted the missile guidance system for cost reasons. The latter was produced in nearly twice the numbers, under the designation T-64B1. On September 3, 1976, the T-64B and T-64B1 were accepted for service, featuring the improved D-81TM (2A46-2) gun with a 2E26M stabiliser, a 6ETs40 loader and the 1A33 fire control system, including:
- a 1V517 ballistic calculator
- a 1G21 sight with laser telemetry
- a 1B11 cross-wind sensor.
Fording depth reaches 1.8 m without special equipment. The T-64B had the ability to fire the new 9M112 "Kobra" radio-guided missile (NATO code "AT-8 Songster"); the vehicle carries 8 missiles and 28 shells. The missile control system is mounted in front of the commander's cupola. The T-64B1 instead carries 37 shells and 2,000 rounds of 7.62 mm ammunition, against 1,250 for the T-64B.
They were modernised in 1981 by replacing the gun with a 2A46M-1, the stabiliser with a 2E42, and by mounting 902A "Tucha-1" smoke grenade launchers in two groups of four, one on each side of the gun. Two command versions, very similar to the T-64AK, were produced: the T-64BK and the T-64B1K.
The decision in October 1979 to start production of the 6TD engine, and its great similarity to the 5TDF, made it possible after some study to fit it to the B and B1 versions, and also to the A and AK, yielding the new models T-64AM, T-64AKM, T-64BM and T-64B1M, which entered service in 1983.
Production ended in 1987 for all versions; total production reached almost 13,000.
Modernisations in Ukraine
- T-64BM2, with an 850 hp (625 kW) 5TDFM engine, a new 1A43U fire control system, a new 6ETs43 loader and the ability to fire the 9M119 missile (NATO code "AT-11 Sniper").
- T-64U, which integrated the 1A45 fire control system (from the T-80U and T-84), PNK-4SU and TKN-4S optics for the commander and a PZU-7 sight for the AA machine gun. The commander can thus lay and fire the main gun directly if needed. The tank is also known as the BM "Bulat".
Both variants are protected by Kontakt-5 modular reactive armour, able to resist kinetic-energy projectiles, as opposed to the first models, which were effective only against HEAT shaped-charge ammunition. Both could also be re-engined with the 1,000 hp (735 kW) 6TDF.
As of October 2010, the Ukrainian Army had 47 T-64BM "Bulat" [Т-64БМ "Булат"] tanks in service. In 2010 the Kharkiv Malyshev Factory upgraded 10 T-64B tanks to T-64BM "Bulat" standard, and a further 19 were to be delivered in 2011. These 29 tanks were upgraded under a 200 million hryvnia ($25.1 million) contract signed in April 2009. The T-64B [Т-64Б] tanks were originally produced at Kharkiv in 1980. According to Constantin Isyak, chief engineer of the Malyshev Factory, the T-64BM "Bulat" is armoured to the level of modern tanks. It has "Knife" [Нiж] reactive armour and the "Warta" [Варта] active defence system. The T-64BM "Bulat" weighs 45 tonnes (44 long tons), and with its 850 hp (630 kW) 5TDFM multi-fuel diesel engine it can reach 70 km/h (43 mph) and has a range of 385 km (239 mi). It retains the 125 mm smoothbore gun with a 28-round autoloader, able to fire gun-launched guided missiles, and carries a 12.7 mm AA machine gun and a 7.62 mm coaxial machine gun.
Production history
The T-64 formally entered service in 1967, shortly before the T-72 (serial production had begun in 1963). The T-64 was KMDB's high-technology offering, intended to replace the IS-3 and T-10 heavy tanks in independent tank battalions. Meanwhile, the T-72 was intended to supersede the T-55 and T-62 in equipping the bulk of Soviet tank and mechanized forces, and for export partners and East Bloc satellite states.
It introduced a new autoloader, still used on all T-64s currently in service as well as on all variants of the T-80 except the Ukrainian T-84-120. The T-64 prototypes had the same 115 mm smoothbore gun as the T-62; the tanks put into full-scale production had the 125 mm gun.
While the T-64 was the superior tank, it was more expensive and mechanically complex, and was produced in smaller numbers. The T-72 is mechanically simpler and easier to service in the field, though not as well protected, and its manufacturing process is correspondingly simpler. In line with Soviet doctrine, the relatively small number of superior T-64s was kept ready and reserved for the most important mission: a potential outbreak of war in Europe.
The T-64 was never common in Soviet service, except with those units stationed in East Germany. No T-64s were exported. Many T-64s ended up in Russian and Ukrainian service after the breakup of the Soviet Union.
- Ob'yekt 430 (1957) – Prototype with D-10T 100-mm gun, 120 mm armour, 4TPD 580 hp (427 kW) engine, 36 tonnes.
- Ob'yekt 430U – Project, equipped with a 122-mm gun and 160 mm of armour.
- T-64 or Ob'yekt 432 (1961) – Prototype with a D-68 115-mm gun, then initial production version with the same features, about 600 tanks produced.
- T-64R (remontirniy, rebuilt) or Ob'yekt 432R – Redesigned between 1977 and 1981 with external gear from the T-64A but still with the 115-mm gun.
- T-64A or Ob'yekt 434 – 125-mm gun, “gill” armour skirts, a modified sight, and suspension on the fourth road wheel.
- T-64T (1963) – Experimental version with a GTD-3TL 700 hp (515 kW) gas turbine.
- Ob'yekt 436 – Alternative version of Ob'yekt 432 with a V-45 engine. Three built.
- Ob'yekt 438 and Ob'yekt 439 – Ob'yekt 434 with V-45 diesel engine.
- T-64AK or Ob'yekt 446 (1972) – Command version, with an R-130M radio and its 10 m (33 ft) telescoping antenna, a TNA-3 navigation system, without antiaircraft machine gun, carrying 38 rounds of main gun ammunition.
- Ob'yekt 447 – Prototype of the T-64B; basically a T-64A fitted with the 9K112 "Kobra" system and a 1G21 gunsight. This is the "T-64A" displayed in the Kiev museum.
- T-64B or Ob'yekt 447A (1976) – Fitted with redesigned armour, the 1A33 fire control system, the 9K112-1 "Kobra" ATGM system (NATO code "AT-8 Songster"), TPN-1-49-23 sight, 2A46-2 gun, 2E26M stabiliser and 6ETs40 loader. Later B/BV models have the more modern 1A33-1, TPN-3-49 and 2E42 systems and a 2A46M-1 gun. From 1985 the T-64B was fitted with stronger glacis armour; older tanks were upgraded with a 16-mm armour plate. Tanks equipped with the 1,000-hp 6DT engine are known as T-64BM.
- T-64BV – Features "Kontakt-1" reactive armour and "Tucha" 81-mm smoke grenade launchers on the left of the turret.
- T-64BM2 or Ob'yekt 447AM-2 – "Kontakt-5" reactive armour, rubber protection skirts, 1A43U fire control, 6ETs43 loader and able to fire the 9K119 missile (NATO code "AT-11A Sniper"), 5TDFM 850 hp (625 kW) engine.
- T-64U, BM Bulat, or Ob'yekt 447AM-1 – Ukrainian modernisation, bringing the T-64B to the standard of the T-84. Fitted with "Nozh" reactive armour, 9K120 "Refleks" missile (NATO code "AT-11 Sniper"), 1A45 "Irtysh" fire control, TKN-4S commander's sight, PZU-7 antiaircraft machine-gun sight, TPN-4E "Buran-E" night vision, 6TDF 1,000-hp (735 kW) engine.
- T-64B1 or Ob'yekt 437 – Same as the B without the fire control system, carrying 37 shells.
- T-64B1M – T-64B1 equipped with the 1,000-hp 6DT engine.
- T-64BK and T-64B1K or Ob'yekt 446B – Command versions, with an R-130M radio and its 10-m telescoping antenna, a TNA-3 navigation system and AB-1P/30 APU, without antiaircraft machine gun, carrying 28 shells.
- Obyekt 476 – Five prototypes with the 6TDF engine, prototypes for T-80UD development.
- BREM-64 or Ob'yekt 447T – Armoured recovery vehicle with a light 2.5-tonne crane, dozer blade, tow bars, welding equipment, etc. Only a small number was built.
- T-55-64 – Heavily upgraded T-55 with the complete hull and chassis of the T-64, fitted with "Kontakt-1" ERA. Prototype.
- T-80 and T-84 – further developments of the T-64.
- 1977–1981 – brought to the T-64R standard, reorganisation of external equipment as on the T-64A.
- 1972 redesign, fire control improvement (TPD-2-49 and TPN-1-49-23), inclusion of the NSVT machine gun on an electrical turret, R-123M radio.
- 1975 redesign, new 2E28M stabiliser, 6ETs10M loader, multi-fuel engine, 2A46-1 gun and TNPA-65 night vision.
- 1981 redesign, two sets of six 902A smoke grenade launchers, rubber skirts on the suspension instead of the Gill protection.
- 1983 – T-64AM, T-64AKM: some tanks were equipped with the 6TDF engine during maintenance.
- 1981 redesign, 2 sets of four 902B2 smoke grenade launchers, 2A26M1 gun.
- 1983 – T-64BM, T-64B1M, T-64BMK and T-64B1MK: some tanks were equipped with the 6TDF engine during maintenance.
- 1985 – T-64BV, T-64B1V, T-64BVK and T-64B1VK: with "Kontakt" reactive armour and smoke grenade launchers on the left of the turret.
- BM Bulat – T-64 modernization by the Malyshev Factory in Ukraine (see above).
- BMPV-64 – Heavy infantry fighting vehicle, based on the chassis of the T-64 but with a completely redesigned hull with a single entry hatch in the rear. Armament consists of a remote-controlled 30-mm gun. Combat weight is 34.5 tons. The first prototype was ready in 2005.
- BTRV-64 – Similar APC version.
- UMBP-64 – Modified version that will serve as the basis for several (planned) specialized vehicles, including a fire support vehicle, an ambulance and an air-defence vehicle.
- BMPT-K-64 – This variant is not tracked but has a new wheeled suspension with 4 axles, similar to the Soviet BTR series. The vehicle is powered by a 5TDF-A/700 engine and has a combat weight of 17.7 tons. It is fitted with a remote-controlled weapon station and can transport 3+8 men. Prototype only.
- BAT-2 – Fast combat engineering vehicle with the engine, lower hull and small-roadwheel suspension of the T-64. The 40-ton tractor carries a very large, fully adjustable V-shaped hydraulic dozer blade at the front, a single soil-ripper spike at the rear and a 2-ton crane on top. The crew compartment holds 8 people (driver, commander, radio operator, plus a five-man sapper squad for dismounted tasks). The highly capable BAT-2 was designed to replace the old T-54/AT-T based BAT-M, but Warsaw Pact allies received only small numbers due to its high price, and the old and new vehicles served alongside each other during the late Cold War.
Service history
The tank remained secret for a long time, with the West often confusing it with the less advanced T-72. The T-64 was never exported and has seen only limited combat, in the campaigns against Chechen separatists.
According to David Isby, the T-64 first entered service in 1967 with the 41st Guards Tank Division in the Kiev Military District. This was reportedly prudent given the division's proximity to the factory and the significant teething problems during introduction into service, which required the constant presence of factory support personnel with the division during acceptance and the initial training of crews and service personnel on the new type.
- Transnistria - T-64BVs are in service in unknown numbers by the Dnestr separatists.
- Russia – Around 100 are in reserve and 4,000 are probably in storage.
- Ukraine – 2,345 were in service as of 1995, 2,277 as of 2000 and 2,215 as of 2005. Currently around 600 are in service and more than 1,500 in storage; over 90 of those in active service have been modernised to the T-64BM Bulat standard.
- Uzbekistan – 100 in service as of 2013.
Potential operators
- Peru – T-64s offered by Ukraine will take part in comparative tests by the Peruvian Army to find a replacement for its aging T-55s. Between 120 and 170 tanks may be acquired. The T-64 is competing against the T-90S, M1A1 Abrams, Leopard 2A4 and 2A6, and the T-84.
Former operators
- Soviet Union – Passed on to successor states.
T-64BV technical information
Capabilities and Limitations
The T-64 did not share many drawbacks with the T-72, even though it is often confused with it:
- The automatic loader, hydraulic rather than electric, is much faster (loading cycle of 6 to 13 seconds), more reliable, and less sensitive to jolting when running off-road. It also has a "sequence" fire mode which feeds the gun with shells of the same type in less than 5 seconds. In the modern versions the carousel can also rotate in reverse, keeping the cycle short at the end of a loading sequence.
- Driving seems much less exhausting for the crew, thanks to assisted controls and a more flexible suspension. (Perrett 1987:43)
- The ammunition is stowed at the lower point of the turret shaft, minimizing the risks of destruction by self-detonation.
- Protection remains able to stop some types of modern projectiles.
- The fire control on the B version is very modern.
- The tank commander's cupola provides good vision; the antiaircraft machine gun can be operated from inside the turret, and the commander can also control the main gun sight if necessary.
Additionally, the adoption of the autoloader was highly controversial for several reasons:
- Early versions of the autoloader lacked safety features and were dangerous to the tank crews (especially the gunner, who sits nearby): Limbs could be easily caught in the machinery, leading to horrible injuries and deaths. A sleeve unknowingly snagged on one of the autoloader's moving parts could also drag a crewman into the apparatus upon firing. (Perrett 1987:42)
- The turret was poorly configured to allow the human crew to manually load the gun should the autoloader break. In such situations, rate of fire usually slowed to an abysmal one round per minute as the gunner fumbles with the awkward task of working around the broken machine to load the gun. (Perrett 1987:42)
- While having smaller tank crews (three vs. the usual four) is advantageous since more tanks can theoretically be fielded using the same number of soldiers, there are also serious downsides. Tanks require frequent maintenance and refueling, and much of this is physically demanding work that several people must work together to accomplish. Most of the time, these duties are also performed at the end of a long day of operations, when everyone in the tank is exhausted. Having one less crewman for these tasks increases the strain on the remaining three men and increases the frequency of botched or skipped maintenance. This problem worsens if the tank's commander is also an officer who must often perform other duties such as higher-level meetings, leaving only two men to attend to the tank. (Perrett 1987:42-43) All of this means that tanks with three-man crews are more likely to suffer from performance-degrading human exhaustion, and mechanical failures that take longer to fix and that keep the tank from reaching the battlefield. These problems are exacerbated during prolonged time periods of operations.
- The T-64 was criticized for being too mechanically complex, which resulted in a high breakdown rate. Problems were worst with the suspension system, which was of an entirely new and advanced design. Due to these problems, teams of civilian mechanics from the T-64 factories were "semi-permanent residents" of Soviet tank units early in the tank's service. (Perrett 1987:43-44)
- Length (gun to the front): 9.295 m.
- Length (without the gun): 6.54 m.
- Breadth: 3.6 m.
- Height: 2.17 m.
- Weight: 42.4 t.
- Engine: 5TDF multifuel (diesel, kerosene and petrol), five opposed-piston cylinders (ten pistons), 13.6 L, developing 700 hp (515 kW) at 2,800 rpm; consumption 170 to 200 litres per 100 km.
- Transmission: two lateral gearboxes with seven forward and one backward gear.
- Three internal tanks with a total capacity of 740 litres, two 140-litre tanks on the mudguards, and two jettisonable 200-litre tanks on the rear of the hull.
- max. road speed: 60.5 km/h.
- max off-road speed: 35 km/h.
- power-to-weight ratio: 16.2 hp/t (11.9 kW/t).
- range: 500 km, 700 km with additional tanks.
- ground pressure: 0.9 kgf/cm2 (88 kPa, 12.8 psi).
- able to ford in 1.8 m of water without preparation and 5 m with snorkels.
- crosses a 2.8 m wide trench.
- crosses a 0.8 m high obstacle.
- max. slope 30°.
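The unit pairs quoted in the specification above (hp and kW, kgf/cm² and kPa/psi) can be cross-checked with a short script. This is only an illustrative sketch: it assumes "hp" means metric horsepower, and the conversion factors are standard definitions rather than figures taken from the source.

```python
# Cross-check the unit conversions quoted in the T-64 specification.
# Assumptions: "hp" is metric horsepower (PS); conversion factors are
# standard definitions, not values from the source document.

HP_METRIC_TO_KW = 0.73549875   # 1 metric horsepower in kilowatts
KGF_CM2_TO_KPA = 98.0665       # 1 kgf/cm^2 in kilopascals
KPA_TO_PSI = 1 / 6.894757      # 1 kPa in pounds per square inch

def engine_power_kw(hp_metric: float) -> float:
    """Convert metric horsepower to kilowatts."""
    return hp_metric * HP_METRIC_TO_KW

def ground_pressure(kgf_cm2: float) -> tuple:
    """Convert ground pressure from kgf/cm^2 to (kPa, psi)."""
    kpa = kgf_cm2 * KGF_CM2_TO_KPA
    return kpa, kpa * KPA_TO_PSI

print(round(engine_power_kw(700)))   # 5TDF engine: 515 kW
kpa, psi = ground_pressure(0.9)
print(round(kpa), round(psi, 1))     # 88 kPa, 12.8 psi
```

Both results agree with the listed values of 700 hp (515 kW) and 0.9 kgf/cm² (88 kPa, 12.8 psi).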
- 125 mm smoothbore 2A46M-1 (D-81TM) gun with 6ETs40 carousel autoloader holding 28 rounds, rate of fire 8 rounds per minute; 36 rounds carried (8 × 9M112M "Kobra" missiles (NATO code "AT-8 Songster") and 28 shells). Available shells are all fin-stabilised:
- anti-personnel/high-explosive fragmentation (APERS) rounds, 3UOF-36 or 3OVF-22.
- armour-piercing fin-stabilised discarding-sabot (APFSDS) rounds, 3UBM-17 or 3UBM-19, or older types with a supplementary charge, with a muzzle velocity of about 1,800 m/s.
- shaped-charge (HEAT) rounds, 3VUK-25 or 3UBK-21.
- coaxial machine gun 7.62 mm PKT with 1,250 rounds.
- remote-controlled air-defence machine gun 12.7 mm NSVT "Utyos" with 300 rounds.
- 4+4 (T-64B) or 6+6 (T-64A) 81 mm smoke mortars 902B "Tucha-2".
- The 1A33 fire control system, with:
- Radio control of the 9K112 "Kobra" missiles (NATO code "AT-8 Songster") launched from the gun.
- The 2E28M hydraulic stabiliser (vertical range -5°20' to +15°15')
- The gunner day sight 1G42 with embedded laser telemeter.
- The TPN-1-49-23 active IR night sight.
- The L2G IR projector left of the gun for illumination.
- The 1V517 ballistic calculator.
- The 1B11 anemometric gauge.
- The tank commander's cupola is equipped with:
- The PNK-4S combined day/night sight, which provides 360° vision and allows the commander to fire the main armament.
- The PZU-6 AA sight.
- The 2Z20 2-axis electrical stabiliser (vertical range -3° to +70°).
- The TPN-3-49 or TPN-4 and TVN-4 night vision for the driver.
- A R-173M radio.
- An NBC protection, with radiation detectors and global compartment overpressure.
- Two snorkels for crossing rivers with a depth up to 5 m.
- A KMT-6 mine clearing plough can be fitted at the front.
- 3-layer composite armour ("K" formula), with thicknesses between 20 and 450 mm:
- front: 120 mm steel, 105 mm glass fibre, 40 mm steel.
- sides: 80 mm steel.
- front of the turret: 150 mm steel, 150 mm glass fibre, 40 mm steel
- lateral rubber skirts protecting the top of the suspension.
- Kontakt-1 reactive bricks covering:
- the front and the side of the turret
- the glacis
- the lateral skirts
See also
Tanks of comparable role, performance and era
- Chieftain tank : Approximate British equivalent
- T-64A Main Battle Tank at KMDB.
- Три танкиста (Three Tankers)
- Perrett 1987:42
- Main battle tank T-64 (Основной боевой танк Т-64), http://www.meshwar.vistcom.ru/tech/t-64.htm
- "Украинская армия получила десять модернизированных Т-64" (The Ukrainian army received ten modernised T-64s), wknews.ru, 28 October 2010
- Kharkiv Morozov Machine Building Design Bureau Main Characteristics of the Upgraded BM Bulat Battle Tank
- Sewell, Stephen, CW2 (rtd). "Why Three Tanks?" (Armor, July–August 1998), p.45.
- Т-64: Чи піде «під ніж» унікальна техніка? (T-64: Will Unique Technology go "Under the Knife"?) at Військо України (Ukrainian Army)
- p.13, Isby, per "Victor Suvorov"
- T-64 MBT at Warfare.ru
- Ground Forces Equipment - Ukraine
- Uzbek-Army Equipment
- Peruvian Tank Contenders - Army-Technology.com, May 17, 2013
- Isby, D.C. (1988). Ten million bayonets: inside the armies of the Soviet Union, Arms and Armour Press, London. ISBN 978-0-85368-774-0
- Perrett, Bryan (1987). Soviet Armour Since 1945. London: Blandford Press. ISBN 0-7137-1735-1.
- Saenko, M., V. Chobitok (2002). Osnovnoj boevoj tank T-64, Moscow: Eksprint. ISBN 5-94038-022-0.
- Sewell, Stephen ‘Cookie’ (1998). “Why Three Tanks?” in Armor vol. 108, no. 4, p. 21. Fort Knox, KY: US Army Armor Center. ISSN 0004-2420. (PDF format)
- Zaloga, Steven (1992), T-64 and T-80, Hong Kong: Concord, ISBN 962-361-031-9.
- BM Bulat Main Battle Tank, Ukraine
- Ukrspets on T-64 upgrades
- Kampfpanzer T-64 (German language)
- T-64 and Bulat at KMDB (manufacturer's site) | 1 | 7 |
Volume 21, Number 9
Fannie Mae, Freddie Mac
Phase two of financial crisis
Despite repeated efforts during the past year by the Federal Reserve (Fed), the U.S. Treasury, regulatory agencies, and global central banks, the current financial crisis has not been contained, let alone resolved. In the months since the Bear Stearns investment bank bailout in March 2008, the stability of the U.S. financial system has continued to deteriorate.
This past July, a second major financial instability event occurred: the near collapse of the two quasi-government housing market agencies, Fannie Mae and Freddie Mac. As with Bear Stearns, collapse was barely averted, this time by an announcement from Fed Chair Ben Bernanke and U.S. Treasury Secretary Henry Paulson that the government would bail out the two agencies. Unlike Bear Stearns, the bailout requires Congressional action, and its cost may be as high as $300 billion, according to independent estimates. That's nearly ten times the $29 billion bailout cost of Bear Stearns.
Evolution of the Fannie/Freddie Collapse
It is often noted in the business press that Fannie/Freddie are liable for more than $5.3 trillion of total mortgage debt outstanding, or roughly half of total U.S. mortgage debt, which stood at $10.6 trillion as of March according to the Fed's Flow of Funds data. Less often noted is that since 2001 this mortgage debt had ballooned from $4.8 trillion to $10.6 trillion, an increase of nearly $6 trillion in just 7 years. Between 2003 and 2006 alone, mortgage debt recorded net growth of more than $1 trillion a year, nearly half of it "bad" subprime mortgage debt.
While banks and mortgage companies reaped super-profits from multi-trillion dollar mortgage lending during 2003-2006, they simultaneously pushed for the full privatization of Fannie/Freddie. They were not able to achieve that, but were able to keep Fannie/Freddie underfunded and freeze the latter's share of mortgages purchased at no more than 40 percent of the total mortgage market.
But once the subprime mortgage bust began in late 2006, the same banking and mortgage lenders sought to have Fannie/Freddie buy up their bad debt in greater and greater volumes, including their so-called securitized packages of bad subprime mortgages. Fannie/Freddie's bad debt load quickly accelerated. Its share of the mortgage debt market in turn rose rapidly, from less than 40 percent in 2005 to more than 70 percent. By the first quarter of 2008, more than 80 percent of all mortgages issued were purchased or guaranteed by Fannie/Freddie.
Fannie/Freddie's debt load rose quickly, but their funding and reserves on hand to cover the debt did not. By summer 2008 the agencies found themselves with more than $5 trillion in liabilities, of which $1.7 trillion was direct debt, with only $81 billion in reserves.
Created in 1938 to rescue homeowners and mortgages from a similar bout of banking speculation gone bust, Fannie/Freddie were partly privatized in 1968, which means they were no longer purely government agencies but became corporations in which private investors purchased stock. Their private investors include other financial institutions, wealthy independent investors, private equity funds, hedge funds, pension funds, and various foreign banks and institutions. Fannie/Freddie's directly liable debt (called agency debt) of $1.7 trillion (of the total $5.3 trillion) is heavily owned by foreign central banks.
Should Fannie/Freddie collapse, their investors would suffer major financial losses as well as possible defaults and bankruptcies of their own. Foreign central banks would be especially hard hit. That would mean spreading and deepening the financial crisis further, not only in the U.S. but globally.
Investors became concerned that the two agencies could not cover their multi-trillion dollar liabilities with their miniscule liquid funds on hand. With housing prices continuing to fall and foreclosures rising, in early July analysts estimated Fannie/Freddie losses over the coming year of $100 to $300 billion depending on how far housing prices might fall. Investors did what most investors in any company do in such circumstances—they began a wholesale dumping of their Fannie/Freddie stock. The break came on Friday, July 11, as Fannie/Freddie stock prices plummeted by 50 percent.
Amazingly, even though Fannie/Freddie's reserves had been declining throughout the current financial crisis, particularly in the first half of 2008, instead of taking action, U.S. government regulators repeatedly eased the amount of funds the agencies were required to keep on hand to cover emergencies such as that which occurred in July. From 30 percent at the outset of 2008, regulators reduced required reserves to 20 percent and then to 15 percent, with talk of a further cut to 10 percent in September 2008 on the agenda.
An opportunity to do something about the situation arose in mid-May, in the form of proposed housing assistance legislation for homeowners facing foreclosure. But Congress did nothing. Instead, it accepted promises that Fannie/Freddie would voluntarily raise capital to add to their reserves. Even tepid proposals to change the agencies' regulators—a kind of rearranging of deck chairs on the Titanic—failed to pass Congress last spring.
In the days leading up to July 11, government regulators, the Fed, and the Treasury repeatedly proclaimed that the two agencies had sufficient capital, were voluntarily raising more, and that no rescue of the agencies was necessary. Of course, Paulson/Bernanke never bothered to explain how companies with such a collapse in stock prices might be able to raise capital and thus avert the crisis. Even before the near collapse, both agencies were jointly able to raise only $20 billion. By the end of the week of July 7-11, Fannie Mae's stock price was down 76 percent over the previous year, and Freddie's had fallen 83 percent.
Over the weekend of July 12-13, Paulson/Bernanke and regulators did another about-face. When markets opened on Monday, July 14, they announced a plan to guarantee a government bailout of Fannie/Freddie. The plan required neither the Fed nor the Treasury to directly fund the bailout (neither had sufficient funds, in any event). Instead, Congress would be asked to provide the bailout funding. Until such funds were forthcoming, however, the Fed would provide interim emergency loans to the two agencies. Paulson also proposed the U.S. Treasury buy the two agencies' public stock, thus propping up their stock prices, if necessary.
The moribund housing bill, which had earlier failed to pass Congress, was quickly resurrected following July 14. The Bush administration also proposed to repopulate the boards of directors of the two agencies with more Wall Street bankers. And Paulson/Bernanke made public assurances that "all necessary lines of credit" would be open to the agencies until Congress provided more substantial and permanent funding. The Congressional bill that eventually passed in late July ultimately provided for a $300 billion line of credit—just about what is projected by analysts as the total losses of the two companies over the next period. The $300 billion is to be disbursed by the Treasury to Fannie/Freddie as needed, either as loans or as government direct purchases of the companies' stock.
Despite the bailout announcement, a further general fall in the New York stock market and a further crisis in confidence in the banking system and financial institutions followed. The California bank, IndyMac, failed soon after, and stock prices of other regionals like WaMu, National City, and others significantly declined. The Standard & Poor's 500 bank stock index suffered its worst decline since 1989. Many banks and mortgage lending companies now teeter on the edge of default and bankruptcy.
The Strategic Significance of Fannie/Freddie
The near collapse and proposed bailout of Fannie/Freddie represents several important developments in the current financial crisis. First, it means the current financial crisis has not been stabilized, but has actually gotten worse. According to Bill Gross, manager of PIMCO, the world's biggest bond fund, falling U.S. home prices will require financial institutions to write off more than $1 trillion in losses. The projected two to three million foreclosures may prove even larger, given the estimated 25 million whose homes are now in negative equity (worth less than the purchase price) and the remaining mortgage costs.
It also means that the Fed is no longer able to deal with the crisis on its own—as it essentially did with Bear Stearns. It has now clearly passed the buck to Congress. How much more will the bailout cost? According to a July Standard & Poor's estimate, the cost would run somewhere between $420 billion and $1.1 trillion. That compares to the roughly $250 billion cost of the last housing market bailout, the Savings & Loan debacle of the late 1980s.
To date the Fed has committed more than $400 billion of its roughly $800 billion to bail out Bear Stearns and prevent the collapse of other banks in the U.S. and abroad. In July it extended prior deadlines to provide special funding to banks and financial institutions still in trouble into 2009 and it will no doubt have to extend that guarantee as the crisis deepens.
The Fed has also lowered interest rates as far as it believes it can—to 2 percent. Its policy of engineering short-term interest rate reductions has clearly failed. Lowering rates has not generated a recovery in the real economy or even assisted bank lending much. Banks continue to be reluctant to lend to each other, let alone to other non-bank businesses or homeowners. All that lower Fed interest rates have accomplished is to fuel the devaluation of the dollar, feed currency speculators preying upon that devaluation, raise all types of commodity prices in the U.S., and in effect export part of the U.S. slowdown to other economies.
This shifting of the burden for bailing out the financial system to the U.S. Treasury and Congress signals an important shift in capitalist financial strategies for dealing with the crisis. It means monetary (Fed) solutions to the financial crisis have been effectively put on hold. Fannie/Freddie thus represents the shift into the second phase of financial crisis, while Bear Stearns represented the end of the first phase of the crisis.
They also represent a strategic crossroads. It is clear the Fannie/Freddie bailout is inevitable so long as housing prices continue to drop (which they will), foreclosures continue to rise, and housing market losses continue to grow. It remains to be seen how successful a proposed future bailout by Congress and the Treasury will be in stabilizing the two agencies. Should Fannie/Freddie fail to raise at least another $100 billion in capital, or should their stock prices continue to decline, the two agencies might be forced to sell assets at fire sale prices. This very same development began occurring at the end of July among investment and commercial banks also unable to raise capital to cover losses. The hybrid giant bank, Merrill Lynch, began fire sales of its assets at the end of the month, dumping $31 billion in bad loans and mortgages for only $7 billion—a move almost certain to be copied by other banks. Fannie/Freddie might not be able to avoid similar action.
Fannie/Freddie also marked the entry of the New York stock markets into clear bear territory. Stock prices have now crossed a threshold. Despite occasional recoveries, they will proceed to decline further. In even typical postwar recessions, stock prices have fallen 30-40 percent. The current recession is anything but normal, so stock prices can be expected to fall at least as far.
Perhaps one of the more important strategic representations of Fannie/Freddie, and one of the least understood, are their tie-ins to the global derivatives markets. There are three critical numbers associated with Fannie/Freddie. First is their total liability of more than $5.3 trillion in mortgage debt. Second is their combined direct so-called agency debt of $1.7 trillion (which is part of that $5.3 trillion total). Third is the more than $2 trillion in derivatives they own, which were taken on to hedge their risks in their mortgage portfolio. The derivatives positions connect them to countless (and mostly unknown) global financial institutions. Were Fannie/Freddie to default or go bankrupt on their direct agency debt, the global impact via the derivatives market would be enormous. The magnitude of the current financial crisis would grow several-fold.
Nationalize the Housing Market?
At this point in the crisis, bailout means the government must get more deeply involved in funding residential housing markets than ever before. Of course, banks and financial institutions don't like that idea at all. That dislike is what lay behind their growing opposition to Fannie/Freddie during the boom times of 2002-06 and their drive to fully privatize the agencies at that time. But privatization is now clearly off the table, while bailout and deeper regulation are on, which raises the fundamental question: should private lenders be involved at all in financing the housing market or should the housing markets in effect be nationalized?
Of course, nationalizing is a stopgap measure designed to temporarily refloat the markets and institutions at taxpayer expense. Once they are financially stable again, the idea is to sell them back to private interests so they can make a profit once more. It's the basic capitalist strategy to "socialize the costs" and "privatize the gains," which is the essential capitalist definition of nationalization. Even Wall Street Journal editorials now advocate that particular definition, arguing the current arrangements represent a "dishonest kind of socialism." Instead, they propose nationalizing Fannie/Freddie in "a more honest form of socialism." That formula for nationalization is essentially what ex-Fed Chair Alan Greenspan recently proposed: take them over, re-stabilize them at direct taxpayer expense, then spin them back into six or seven private finance companies again.
Despite Greenspan and business press pundits raising the capitalist version of nationalization, the idea of a different kind of nationalization is now possible, as it becomes increasingly clear that private financial institutions are the core cause of the crisis and that the "only game in town" to keep things going is direct government control of the housing markets.
Fannie/Freddie, Deflation, and Epic Recession
Financial instability in the U.S. has continued to worsen, not improve. The key question is why was there a second financial blow up with the near collapse of Fannie Mae and Freddie Mac? The answer to that question is the rampant speculative investing that has been plaguing the U.S. economy for some time.
Since the 1980s, speculative investment has been growing in both its weight and mix as a percentage of total investment in the economy. Speculative investment feeds off of, and simultaneously drives up, prices for financial and other assets. The most notable current example is what is now happening with commodity price inflation. But before commodity speculation and inflation, it was housing price inflation and the subprime bubble of 2002-06; before that, the technology stock speculation and price bubble of 1998-2000; and before that, other asset price bubbles in the 1980s and 1990s. Speculative asset price bubbles lead inevitably to speculative asset price busts—i.e., deflation.
Following the July bailout, the troubles at Fannie/Freddie continued to worsen. In August both agencies announced combined additional losses of more than $7 billion, which immediately drove their stock prices to historic lows. Continuing losses and collapsed stock prices will undermine raising sufficient capital to offset expected future losses. The $300 billion bailout may therefore not be enough.
What's been happening with banks and financial institutions in the U.S. over the past year is that housing and other asset prices have continued to fall faster than banks have been able to raise cash and funds from other sources to offset those losses. The bailout of Fannie/Freddie has not resolved in any way the more fundamental housing crisis. Housing prices will continue to fall at least another 20 percent. Foreclosures will continue to rise by the millions for some time, driving the housing price decline. The recent Housing Bill passed by Congress in July addresses only one-tenth of the eventual foreclosures. The bill's $300 billion set-aside to cover Fannie/Freddie losses provides less than a third of total estimated mortgage losses from all sources. In short, the bailout has only temporarily staunched the bleeding—at significant taxpayer expense.
More and more companies now face rising costs due to commodity price inflation and simultaneous falling revenues. Their inevitable response will be mass layoffs, which are coming in 2008 and into 2009. The entire process leads to something called epic recession—a recession that is unlike any previous recession and that is global in character (see Z Magazine, June 2008). Already numerous economies have begun following the U.S. into recession. The UK, Ireland, Spain, Italy, Portugal, and New Zealand have all clearly tipped into recession. Two of the world's other top four economies, Japan and Germany, have joined the downturn as well. The contraction has clearly begun to synchronize globally.
The ultimate driver of the entire process is the unwinding of the excess $21.6 trillion in net new debt added to the U.S. economy since 2001 and the deflation that debt unwinding is now causing. Behind the debt-deflation dynamic, however, lies the growing imbalance of speculative investment in the U.S. that has been building for decades and the even more fundamental causes that have been driving that speculation in turn.
Jack Rasmus's writing on the current financial and economic crisis and related topics is available at www.kyklosproductions.com.
Z Magazine Archive
Announcements

LABOR - May 1 is May Day. Workers of the world will celebrate the 124th anniversary of International Workers' Day. Born out of a call for an 8-hour workday in the United States, this day is an opportunity for all workers to show their solidarity with one another, as well as to renew the call for labor rights.
FARM CONFERENCE - The Farm Conference on Community and Sustainability will be held May 24-26 in Summertown, TN, in partnership with the Fellowship of Intentional Communities. Tour green homes, see sustainable food production, learn about solar installations, alternative education, midwifery, and more.
Contact: [email protected]; http://www.thefarmcommunity.com/.
PALESTINE - The Conference of the Palestinian Shatat in North America will be held June 3-5 in Vancouver. The conference will examine the future of the Palestinian liberation movement.
Contact: [email protected]; http://www.palestinianconference.org/.
LABOR - The Pacific Northwest Labor History Association’s 45th annual conference will be held May 3-5, in Portland, OR. This year’s theme is Labor Under Attack: Learning from the Past and Preparing for the Future. A call for presentations, workshops and papers is currently underway.
Contact: PNLHA, 27920 68th Ave. East, Graham, WA 98338; 206-406-2604; [email protected]; http://www3.telus.net.
MARIJUANA - On the first Saturday of May marijuana legalization activists will hold informational and educational events, rallies and marches in over 300 cities around the world.
ECONOMICS - The Union For Radical Political Economics will hold its 39th annual conference May 9-11 in New York City.
RECLAIM THE DREAM - The 2013 Poor People’s Campaign & March from Baltimore to Washington D.C. will be May 11. Communities, schools and unions interested in participating are encouraged to contact the Baltimore People’s Assembly.
Contact: 410-500-2168; 410-218-4835; [email protected]; Southern Christian Leadership Conference of Baltimore and the Baltimore Peoples Power Assembly, 2011 N. Charles St., Baltimore, MD 21218.
MOTHER’S DAY - The 17th Annual Mother’s Day Walk For Peace will be May 12th, in Dorchester, MA. The walk began in 1996 for families who had lost children to violence. The day has become a way for thousands of people to financially support the work of the Louis Brown Peace Institute.
Contact: http://www.ldbpeaceinstitute.org/; http://mothersdaywalk4peace.org/.
NATO 5 - An International Week of Solidarity with the NATO 5 has been called for May 16-21. Organizers call on supporters to raise awareness of the NATO 5 and raise funds for the defendants on the one-year anniversary of their preemptive arrests.
Contact: [email protected]; https://nato5support.wordpress.com.
MOUNTAINTOP - The 2013 Mountain Justice Summer Activist Training Camp will be held May 19-27 in Damascus, VA. It will be a week of workshops, field trips to view Mountain Top Removal coal mines, direct actions, and service projects.
FEMINIST SCI-FI - The feminist science fiction convention WisCon 37 is scheduled for May 24-27 in Madison, WI.
Contact: WisCon, ? SF3, PO Box 1624, Madison, WI 53701; [email protected]; http://www.wiscon.info/.
ANARCHY FEST - A month-long Festival of Anarchy is scheduled for May in Montreal. The festival includes The Montreal Anarchist Bookfair (May 19-20).
Contact: http://www.anarchistbookfair.ca/; http://www.radicalmontreal.com/.
LABOR - The International Labor Rights Forum will present: Down the Supply Chain, Driving Corporate Accountability, on May 22 in Washington, DC. The Labor Rights Awards Ceremony and Reception will honor pioneers in supply chain worker organizing, working solidarity and international labor rights policy.
MULTICULTURE - The 26th annual National Conference on Race & Ethnicity in American Higher Education (NCORE) will take place May 28-June 1, in New Orleans.
Contact: SWCHRS, 3200 Marshall Avenue, Suite 290, Norman, OK 73072; 405-325-3694; [email protected]; www.ncore.ou.edu.
MEDIA - The 2013 Alliance for Community Media Annual Conference will be held May 29-31, in San Francisco, CA. Participants will include educators, community leaders, media professionals, journalists, nonprofit leaders, policymakers and students.
RADIO - The 38th Annual Community Radio Conference is scheduled for May 29-June 1, in San Francisco, CA, with discussions and workshops.
Contact: 1101 Pennsylvania Ave. NW, Suite 600, Washington, DC 20004; 202-756-2268; [email protected]; http://www.nfcb.org/.
BRADLEY MANNING - On June 1, a rally will be held at Fort Meade in support of Bradley Manning.
BIKES - Bikes Not Bombs is holding its 24th annual Bike-A-Thon and Green Roots Festival in Boston, MA on June 3, with several bike rides scheduled, music, exhibitors and more.
Contact: Bikes Not Bombs, 284 Amory St., Jamaica Plain, MA 02130; 617-522-0222; [email protected]; www.bikesnotbombs.org.
LEFT FORUM - The 2013 Left Forum will be held June 7-9, at Pace University in New York City.
Contact: 365 Fifth Avenue, CUNY Graduate Center, ? Sociology Dept., New York, NY 10016; http://www.leftforum.org/.
VEGAN FEST - Mad City Vegan Fest will be held in Madison, WI, June 8. The annual event features food, speakers, and exhibitors.
Contact: 122 State Street, Suite 405 B, Madison, WI 53701; [email protected]; http://veganfest.org/.
ADC CONFERENCE - The American-Arab Anti-Discrimination Committee (ADC) holds its annual conference June 13-16, in Washington, DC, with panel discussions and workshops on civil rights, media and other topics.
Contact: 1990 M Street, Suite 610, Washington, DC, 20036; 202-244-2990; [email protected]; http://convention.adc.org/.
CUBA/SOCIALISM - A Cuban-North American Dialog on Socialist Renewal and Global Capitalist Crisis will be held in Havana, Cuba, June 16-30. There will be a 5 day Seminar at University of Havana, plus visits to a cooperative, urban garden, community development project, social research centers, and educational & medical institutions.
Contact: [email protected]; http://www.globaljusticecenter.org/.
NETROOTS - The 8th Annual Netroots Nation conference will take place June 20-23 in San Jose, CA. The event features panels, trainings, networking, screenings, and keynotes.
Contact: 164 Robles Way, #276, Vallejo, CA 94591; [email protected]; http://www.netrootsnation.org/.
MEDIA - The 15th annual Allied Media Conference will be held June 20-23, in Detroit.
Contact: 4126 Third Street, Detroit, MI 48201; http://alliedmedia.org/.
GRASSROOTS - The United We Stand Festival will be hosted by Free & Equal, June 22 in Little Rock, Arkansas. The festival aims to reform the electoral process throughout the U.S.
SOCIALISM - The Socialism 2013 Conference is scheduled for June 27-30 in Chicago, featuring talks and panel discussions.
Contact: [email protected]; http://www.socialismconference.org.
LITERACY - The National Association for Media Literacy Education (NAMLE) will hold its conference July 12-13 in Los Angeles under the heading, Intersections: Teaching and Learning Across Media.
Contact: 10 Laurel Hill Drive, Cherry Hill, NJ 08003; http://namle.net/conference/.
IWW - The North American Work People’s College will take place July 12-16 at Mesaba Co-op Park in northern Minnesota. The event will bring together Wobblies from branches across the continent to learn new skills and build One Big Union.
PEACESTOCK - On July 13th, the 11th Annual Peacestock: A Gathering for Peace, will take place at Windbeam Farm in Hager City, WI. The event is a mixture of music, speakers and community for peace. Sponsored by Veterans for Peace.
Contact: Bill Habedank, 1913 Grandview Ave., Red Wing, MN 55066; 651-388-7733; [email protected]; http://www.peacestockvfp.org.
CHILDREN’S DEFENSE - July 15-19, join clergy, seminarians, Christian educators, young adult leaders and other faith-based advocates for children at CDF Haley Farm in Clinton, Tennessee, for five days of spiritual renewal, networking, movement building workshops, and continuing education about the urgent needs of children at the 19th annual Proctor Institute for Child Advocacy Ministry.
Contact: [email protected]; http://www.childrensdefense.org.
ACTIVIST CAMP - Youth Empowered Action (YEA) Camp will have sessions in July and August in Ben Lomond, CA; Portland, OR; Charlton, MA. YEA Camp is designed for activists 12-17 years old who want to make a difference in the world.
Contact: [email protected]; http://yeacamp.org/.
LA RAZA - The annual National Council of La Raza (NCLR) Conference is scheduled for July 18-19 in New Orleans, with workshops, presentations and panel discussions.
Contact: NCLR Headquarters Office, Raul Yzaguirre Building, 1126 16th Street, NW, Washington, DC 20036; 202-785-1670; www.nclr.org.
LABOR - The Eastern Conference For Workplace Democracy: Growing Our Cooperatives, Growing Our Communities, will be held at Drexel University in Philadelphia, PA, July 26-28.
Contact: [email protected]; http://east.usworker.coop/.
WOMEN/LYNNE STEWART- Radical Women is asking for support letters and cards to be sent to Lynne Stewart. Stewart is a civil rights attorney and political prisoner who is currently in jail. She has breast cancer and authorities have denied her request for transfer from her Texas prison to the New York City hospital where she received medical attention during a prior bout of breast cancer. Send messages and cards to: Lynne Stewart 53504-054, Federal Medical Center Carswell, P.O. Box 27137, Fort Worth, TX 76127.
Contact: 747 Polk Street, San Francisco, CA 94109; 415-864-1278; [email protected]; http://lynnestewart.org/; http://www.radicalwomen.org/.
HAITI/WOMEN - Haiti’s government is considering a legal reform measure that would prohibit and punish all sexual assault, including marital rape. MADRE and the International Campaign to Stop Rape & Gender Violence in Conflict are launching a petition to raise international support for this push to address violence against women in Haiti.
Contact: 121 West 27th Street, #301, New York, NY 10001; 212-627-0444; [email protected]; http://www.madre.org.
SYRIA/MIDDLE EAST - The Middle East Children’s Alliance (MECA) is currently seeking funds to assist more than 200,000 refugees fleeing violence in Syria.
FOLK FESTIVAL - The Falcon Ridge Folk Festival will be held August 2-4, in the Berkshires, NY.
Contact: http://www.falconridgefolk.com/; [email protected].
WAR RESISTERS - The War Resisters League will hold its 90th anniversary conference, Revolutionary Nonviolence: Building Bridges Across Generations and Communities, August 1-4, at Georgetown University. The event will focus on the U.S.’ long history of antimilitarism.
Contact: 339 Lafayette Street, New York, NY 10012; 212-228-0450; [email protected]; http://www.warresisters.org.
POPULAR ECONOMICS - The Center for Popular Economics is holding its 2013 Summer Institute August 4-9 at Hampshire College in Amherst, MA. No background in economics is needed for this intensive training. This year’s theme is, The Care Economy: Building a Just Economy with a Heart.
Contact: Center for Popular Economics, PO Box 785 Amherst, MA 01004; 413-545-0743; [email protected]; www.populareconomics.org.
VETERANS - Veterans for Peace is holding the 28th annual convention August 6-11 in Madison, WI. This year’s theme is, Power To The Peaceful.
DEMOCRACY - The Democracy Convention will take place August 7-11 in Madison, WI. The convention brings together nine conferences including topics such as media, education, defense, race, environment and others.
MEN - The 38th National Conference on Men & Masculinity: Forging Justice: Creating Safe, Equal and Accountable Communities, presented in partnership with HAVEN, will be held in Detroit, MI, August 8-10.
Contact: [email protected]; http://www.nomas.org/.
OCCUPY - An Occupy National Gathering will be held in Kalamazoo, MI, August 21-25.
Contact: [email protected]; http://occupynationalgathering.net/.
COMMUNITIES - The Communities Conference is a networking and learning opportunity for co-operative or communal lifestyles, with workshops, events and entertainment; scheduled for August 30-September 2 at the Twin Oaks Community in Louisa, Virginia.
LABOR DAY - The 29th annual Bread and Roses Festival, a celebration of the ethnic diversity and labor history of Lawrence, MA, will be held September 2, in honor of the 1912 Bread and Roses Strike. There will be music, dance, poetry, drama, ethnic food, historical demonstrations, walking & trolley tours.
Contact: PO Box 1137, Lawrence, MA 01842; 978-794-1655; http://www.breadandrosesheritage.org/.
OCCUPY WALL STREET - September 17 is the two-year anniversary of the Occupy Wall Street movement. Events are planned in New York City and worldwide.
TEACHERS - The 13th Annual Conference, “Teaching for Social Justice: The Politics of Pedagogy,” will be held October 12 in San Francisco, CA. The free event features workshops, resources, and free childcare.
Contact: 415-676-7844; [email protected]; http://www.t4sj.org/.
HAITI - International Action, which brings clean water and chlorinators to Haiti, seeks office space capable of housing up to six people and their office equipment.
Contact: Zach Bremer, [email protected]; 202-488-0735; http://www.haitiwater.org/.
MEDIA - The Union for Democratic Communications and Project Censored are sponsoring a joint conference on media democracy, media activism and social justice to be held November 1-3 at the University of San Francisco. Proposals for presentations, workshops and panels from activists and critical scholars are invited.
In 2003, an era ended. The technology giant Hewlett-Packard Company, Palo Alto, Calif., announced the end of production of the HP 48, the scientific calculator/computer used by numerous surveyors, who had ranked it above most other devices in its class. What was even more upsetting for surveyors was that HP didn't announce a replacement product. For three decades Hewlett-Packard had been a standard for scientific calculations in a number of disciplines. The unit was well-suited for a surveyor's fieldwork and was a particularly applicable tool for solving problems in the field and office for many surveying firms.
From HP's development of calculators, businesses were founded and products were created that specially served the needs of surveyors. The calculator line provided a new starting point for enterprising inventors and developers to market field computing products to surveyors.
In the BeginningThe HP brand has been one of the gold standards of manufacturers' names associated with surveying. This began in 1972 with the introduction of the HP 35, the first electronic handheld calculator that enabled trigonometric and logarithmic functions as fast as one could press the keys, in addition to the more basic things like taking square roots (or any root for that matter). Before the HP 35 (whose name came about because it had 35 keys), handheld electronic calculators mostly did the four basic algebraic functions, and the "scientific" ones squared numbers and did square roots. Desktop units had been produced by HP since 1968, but the HP 35 was introduced because Bill Hewlett, the CEO of HP in the early '70s, believed market studies that showed little demand for a handheld scientific calculator to be wrong.
Until the release of the HP 35, surveyors, unless they spent big bucks for electronic or mechanical calculators, used trig and log tables to do most of their calculations. But even more amazing than removing the tedious table lookup from the calculations was the fact that the HP 35 allowed surveyors to make calculations in the field that they had only dreamed of until that point. The 1970s and 1980s also saw a brief spurt of activity at HP in the development of software, EDMs (HP 3800 series) and total stations, culminating in the HP-3820 (a model number HP later reused for an inkjet printer). In fact, the term "total station" comes from HP's name for their products that integrated angle and distance instruments.
The HP 35 was quickly followed in 1973 by the HP 45. It introduced the ability to work in degrees and grads, not just radians. The HP 45 also introduced polar-to-rectangular conversions and the now ubiquitous H.MS key that allows simple conversion of sexagesimal numbers (degrees-minutes-seconds) to and from decimal degrees. With nine memory registers and the summation function, surveyors using the HP 45 could accumulate latitudes and departures without actually writing them down. Averages and standard deviations could also be calculated. Surveyors didn't think it could get any better.
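The H.MS conversion and polar-to-rectangular functions the HP 45 introduced are easy to sketch in modern code. The snippet below is an illustrative reconstruction, not HP firmware: the function names are invented here, and the DD.MMSS digit packing follows the calculator's documented convention for entering sexagesimal angles. Accumulating latitudes and departures, as the article describes, is just this polar-to-rectangular step repeated for each course.

```python
import math

def hms_to_decimal(d_ms: float) -> float:
    """Convert an H.MS-packed angle (e.g. 32.1545 = 32 deg 15' 45") to decimal degrees.

    Mirrors the HP 45's H.MS key; the DD.MMSS packing is the calculator's
    entry convention, not a general standard.
    """
    degrees = int(d_ms)
    mmss = round((d_ms - degrees) * 10000)  # remaining digits hold MMSS
    minutes, seconds = divmod(mmss, 100)
    return degrees + minutes / 60 + seconds / 3600

def latitude_departure(bearing_dms: float, distance: float):
    """Polar-to-rectangular: return a course's latitude (N/S) and departure (E/W)."""
    theta = math.radians(hms_to_decimal(bearing_dms))
    return distance * math.cos(theta), distance * math.sin(theta)

# A 100-foot course bearing N45 deg 00'00"E splits evenly into both components.
lat, dep = latitude_departure(45.0000, 100.0)
print(round(lat, 3), round(dep, 3))  # 70.711 70.711
```

Summing the latitude/departure pairs over a closed traverse (the HP 45's summation registers did exactly this) gives the closure error without writing any intermediate values down.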
The HP 41 was the company's first alphanumeric calculator. It started to blur the distinction between calculator and computer. The unit had the capability to create user functions and to rename (or more correctly, re-position) existing functions to other keys. For instance, a user could take all the keys in the left-hand column, which included the +, -, x and division keys, and assign them to other keys. The HP 41 also had a greater expanded memory capability and a low-power LCD display. And it had four ports that surveyors thought were practically magical.
Those four ports allowed users to plug in for extra memory, pre-programmed software modules, a magnetic card reader, an optical wand and printers. Plugging the HP-IL (for Interface Loop) module into one of these ports allowed devices that complied with the HP-IL standard-including a disk drive, printer and third-party devices-to be connected in a loop.
Fast-forwarding to the last of a glorious HP-developed line, the now legendary HP 48 was introduced in 1990. While almost all previous HP calculators were reverse Polish notation (RPN) operation, the HP 48 computing system saw the advent of Reverse Polish Lisp (RPL) operation in a portable unit. RPL was developed as an internal programming language for the HP 18C introduced in 1986. It was actually user-accessible with the HP 28 series introduced in 1987. The HP 48 (as its name implied) brought elements of the HP 41 and HP 28 together. It had a large 64 x 131 pixel graphics screen. And the equation solver and matrix entry, among other capabilities of the HP 28, now had more room to "stretch."
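Since RPN comes up repeatedly in this history, a brief sketch may help readers who have never used it: operands are entered first, and each operator is applied to the values already sitting on a stack, so parentheses are never needed. The following minimal evaluator illustrates the principle (the function and its string-based input are my own illustration, not anything HP shipped):

```python
import operator

def eval_rpn(expression: str) -> float:
    """Evaluate a space-separated RPN expression, e.g. '3 4 + 2 *'."""
    ops = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}
    stack = []
    for token in expression.split():
        if token in ops:
            b = stack.pop()          # the second operand was entered last
            a = stack.pop()
            stack.append(ops[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

# '3 4 + 2 *' computes (3 + 4) * 2 with no parentheses keyed in
```

The stack is also why intermediate results never need to be written down, which is exactly the workflow surveyors came to rely on.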
HP-based Surveying Software

In 1983, while teaching at the University of Missouri, I was hired by The Lietz Co. (later acquired by Sokkia and now called Sokkia Corporation), to develop a data collector software program for its EDMs and semi-total stations. Semi-total stations (also known by some as "manual total stations") were optical theodolites with integrated EDMs. Distance could be electronically transmitted to a data collector, and the horizontal and angle values keyed in by hand. Some semi-total stations even had an electronically sensed vertical angle that could be electronically output as well.
Bob Martin, vice president-products for Lietz, agreed during discussions with me that a data collection software based on the HP 41 made sense since surveyors were flocking to the hardware device for their field and office calculation needs. Surveyors particularly liked it because, being the individualists most are, they could write their own surveying programs. But this effort eventually was diverted away from the HP 41 platform when Mike Beckingham, managing director of Sokkia's Australian subsidiary, created the SDR1 electronic data collector, the first in the series of Sokkia's trademarked Electronic Field Books, with Alan Townsend, manager of a New Zealand company, Datacom Software Research (later to become Trimble New Zealand). My work left the HP environment at that time when I joined with the SDR1 group and Bob Martin to develop the SDR2, and eventually the SDR20 series, SDR33 and SDR31. The Sokkia line of data collectors attracted a lot of attention because it provided innovative features and had, at least for those times, a superior user interface that allowed surveyors to get up and running with it quickly. But it was an expensive product-initially just under $4,000, and it originally only worked with other Sokkia products.
Enter Stanley Trent and Harold Hayes

In East Tennessee, one of those individualist surveyors, a man named Stanley Trent, worked on his dream-a data collector based on the HP 41 that would perform all the critical field functions, including hand-entered capture of whatever data a modern surveying instrument would output. With 22 investors to bankroll his efforts, Stanley started a company called Surveyors Module Inc. (SMI) to develop and market his data collection product.
Trent was stretched thin conducting the development and the marketing efforts for his product called the CO-OP Module, which was introduced in 1983, so he went back to one of his original 22 investors named Harold Hayes for help. Hayes headed Hayes Instrument Co., a small business that he says was "still in my garage at that time and still struggling, [but] was making a little money." Hayes considered Trent's request for additional investment. He realized that most of Trent's other investors were surveyors who continued to rave about his product. So, he says, he "refreshed Trent's bank account with a $10,000 [advance] purchase of modules and began helping Stanley develop and market the module." This is when, as Hayes says, "things began to get better." Hayes carried the product through Hayes Instrument Co., and helped Trent market it in other ways, generally helping to make the CO-OP Module a success.
Trent made many trips to Corvallis, Ore., to meet with his module suppliers. On one of these trips, he met Dave Conklin, co-owner of a company called Firmware Specialists Inc. (FSI) in Corvallis. Conklin, a former HP employee, and another ex-HP employee named Steve Chou, who had worked on the HP 41 and HP 75, met Trent in the late '80s. Conklin told Trent that he could develop an electronic interface for the HP 41 that would enable it to communicate with most total stations that used the RS-232C protocol. Because Trent's product was pushing the memory limits of the HP 41, and because addition of an RS-232C interface would consume a lot more power than the HP 41 alone, Conklin also told him that this interface device could be built with additional memory and battery power to overcome those problems. Lietz' SDR series could already communicate with Sokkia total stations using RS-232C. Trent saw a way to improve on this function even further. Being independent, he could provide electronic data collection for all brands of total stations, not just Sokkia instruments alone.
There's a tieback to Sokkia/Lietz in this story: after the SDR2 was on the market, Lietz's Bob Martin and I had been tasked to find an RS-232C interface for the HP 41 so it would work with Sokkia/Lietz instruments. Even though the SDR2 was a successful product, many surveyors asked Lietz for a way to interface the HP 41 with their home-built data collection programs to their total stations. Looking in HP country, we found a small start-up company founded by former HP employees. It was, indeed, the company FSI! Lietz contracted with FSI to develop, build and manufacture the HP IL to RS-232C interface. And it was this (non-exclusive) technology that FSI developed for Lietz that was leveraged into the HP 41 interface offered to Hayes.
With the successful creation of the interface, Hayes Instrument Co. introduced the Hayes HP 41 data collector to the world. With FSI (primarily Steve Chou adding drivers and resolving other software issues) now helping to create the HP 41 program modules, generically called CO-OP 41 modules, they took on the job of marketing and selling the module and the HP 41 interface in a data collection package in 15 states west of the Mississippi. Hayes marketed the product, now remembered as the SMI CO-OP HP 41 module, in Tennessee and five surrounding states; SMI covered the rest of the United States. The product initially sold by SMI was only the module. Trent then realized he would have a more complete product by selling the "data collector" that included the HP 41 interface, module, cables, instruction manuals and other accessories in one neat package. So, he negotiated with Hayes for the right to incorporate and sell the Hayes HP 41 data collector with his module, and according to Hayes, "the Hayes HP 41 data collector became the CO-OP 41 data collector in SMI's territory."
The TDS Link

During a period of flux, Chou left FSI in 1987 taking Bernie Musch, the former HP calculator development manager, with him to create a company to serve surveying software needs. They both took a surveying course at Oregon State University. Chou, the chief programmer of the company now known as Tripod Data Systems (TDS, Corvallis), became involved in the development of software improvements for the CO-OP module, especially the drivers to enable it to communicate with an ever-expanding array of electronic instruments. Musch managed the business aspects of the company. By then (1987), TDS was marketing the module under its own name and had added some software differentiation.
The complete separation between TDS and SMI (and Hayes) occurred with the introduction of the HP 48. TDS was able to get the help of Hewlett-Packard in advance of the introduction of the HP 48 to begin the job of "porting" the HP 41 software to the new platform. TDS even got prototypes several months in advance of the formal introduction of the product. SMI hired Trent's son, Ken, to begin its own development to compete with the TDS product.
HP 48-based Development at TDS

The HP 48 platform helped TDS to power its way to an overwhelming success with field data collection products for surveying. The company has since added PC-based software products, GIS software, and even created the Ranger from scratch, after recognizing the need for even more powerful software and processing power than the HP 48 could support or provide. Later, in 2003, TDS introduced the Recon data collector to replace the discontinued HP 48.
But the early days of the HP 48 were heady and somewhat uncertain at TDS. Dennis York, the software section manager at HP's calculator division in Corvallis, took a prototype HP 48 to his friends at TDS in late 1989. The market for calculators was such that surveying did not dominate the engineering or marketing of the HP calculators. But Hewlett-Packard did recognize that the HP 48 had more potential in the surveying market than HP could hope to take advantage of without third-party help. Development and marketing of a surveying oriented product on the HP 48 was (by now) outside of HP's field of expertise. Wanting a head-start for the HP 48 by having ready-made applications at the time of introduction, York thought he'd ask the folks at hometown TDS to take a look at it. TDS was selected because they seemed best positioned technically, promotionally, financially and managerially to succeed. Musch and Chou were initially dubious of the extra investment this product would take. The old HP 41 code would have to be re-written at best and "ported" at worst. Both options were expensive. York (today a TDS employee) recognized that surveying was one of those applications that could really prove the HP 48 by taking advantage of it. Finally, with the offer of a development system from Hewlett-Packard, TDS decided to tackle it. Prior to the introduction of its Ranger and Recon data collectors, the HP 48 product had been TDS' biggest success.
The Discontinuation of the HP 48

When Hewlett-Packard announced the discontinuation of the HP 48 in 2003, the company only gave six months' notice and production had actually stopped. "We heard through the grapevine that the HP 48 was being discontinued, but there was no official announcement until 2003," says Bill Martin, then-marketing manager, now president of TDS. Recognizing the undying appreciation of surveyors for software based on this platform and having sold more than 100,000 modules during the lifetime of the product, TDS immediately asked Hewlett-Packard how many they could get of the remaining stock. After selling the HP 48-based products for more than 10 years, and having so many surveyor customers committed to the platform, "we wanted to do what we could to make sure that as many of the remaining new HP 48s in inventory would go out to TDS customers," Martin says. Together with product sourced from other wholesalers, TDS managed to get its hands on almost 2,500 units. The last HP 48 was sold in March 2004-fittingly to Hayes Instrument Co. According to Martin, Hayes has been one of TDS' best dealers with most likely the largest HP 48 customer base.
According to Eddie Clanton, president of Hayes Instrument Co., "When HP announced the 48's discontinuation, panic broke out. We looked everywhere and finally found that TDS had taken a stock of them. We had a lot of survey cards and accessories for the 48 but no 48s! Needless to say we put in an immediate order. And they've been selling like hotcakes. Now we're down to the last handful. They'll probably be gone by the time this shows up in print." | 1 | 2 |
The X Window System is the foundation of the graphical environments on Unix and Unix-like operating systems, such as Linux. Since Unix predates the graphical user interface and widespread availability of computer graphics, it has no built-in facility for graphics at the lowest levels of the system. However, as the de facto standard for graphical applications under Unix-like systems, X has been near-ubiquitous on such systems for the last 10-15 years. Unlike most graphics layers, the protocol between applications and the system is network-transparent, allowing programs from multiple machines to appear on a single display without requiring external support.
The X Window System was originally developed at MIT as part of their pioneering computing access program, Project Athena. X originated as an adaptation of Stanford University's W network window system to a much more efficient network protocol, completed by Bob Scheifler in May 1984. Unlike other graphics systems of the time, such as the Macintosh's QuickDraw, X was designed to be both hardware-independent and vendor-independent, since Project Athena intended to connect all systems at MIT regardless of their origin.
The initial implementation of X was quite limited, and over the next year it was extended in a number of backwards-incompatible ways. The result was X Version 9 or X9, which was the first version released under the permissive 'MIT License' in September 1985. This license would have an important effect on the future of the X Window System, allowing anyone to use and modify the code for any purpose whatsoever, provided that the original author's copyright notice is preserved and the recipient understands that the original author provides no warranty of any kind. These permissions have been broadly abused in the intervening time, but also ensured the widespread availability of X throughout the Unix world.
It soon became apparent that, despite the best efforts of X's designers, the X9 and X10 protocols were still quite hardware-dependent. A comprehensive redesign was begun under the joint auspices of MIT and Digital Equipment Corporation, which resulted in the release of X Version 11 in September 1987. Since then, there have been no backwards-incompatible changes in the core X protocol, an impressive record of compatibility.
X11 continued to be developed by a consortium centred at MIT for several years, with many important improvements made to the core X codebase. These changes did not modify the core X11 protocol itself but improved its efficiency, programming interface, and included tools. Ports to a variety of systems, including DEC's non-Unix OS VMS, were made, some of which were contributed back to the MIT X distribution, while some of them were retained as proprietary software by their authors, as the MIT license permits. During the late 1980s and early 1990s X was one theatre of the so-called 'Unix wars' with various vendors adding proprietary extensions to their own versions in an attempt to achieve differentiation from their competitors.
In 1991, X11 Release 5 was released, containing many improvements including a port to the x86-based 'PC' architecture called X386. With the emergence of free, open source Unix variants and lookalikes for the 32-bit x86 platform, this port grew in importance and soon had its own community of maintainers. These maintainers eventually broke with the original authors after they began to make their new versions proprietary, forming the XFree86 project. Over the course of the 1990s, this project would move to the forefront of X development.
The MIT X Consortium shut down at the end of 1996, with its series of successors becoming increasingly ineffectual due to the influence of differentiation-craving proprietary Unix vendors. The leverage of XFree86 as the most active developer of open source X prevented the reference implementation from switching to a new restrictive license for X11R6.4, but the reference implementation continued to rot under neglect.
XFree86 succeeded in modernizing the architecture of X's core with XFree86 4.0, but afterwards XFree86 development began to stagnate. Development was controlled quite tightly by a Core Team with commit access to the repository, and several members of this team were reluctant to give up control. While the burgeoning Linux desktop was making ever heavier demands of X, the XFree86 developers were slow to change in response.
This situation came to a head in early 2004, when the lead developer of XFree86, David Dawes, unilaterally changed the XFree86 license. The new license was widely vilified as being incompatible with the most common free software license, the GNU GPL. As a result a group of developers, led by longtime X developers Keith Packard and Jim Gettys, created a fork from the last MIT-licensed XFree86 release under the auspices of freedesktop.org. A new X consortium, the X.Org Foundation, was created by the old, moribund X.org organization and freedesktop.org, and the XFree86 fork, generally called Xorg, was imported as the new reference X implementation. A resurgence in X development followed, with the resulting changes making a noticeable difference in the modern Linux desktop.
Under the stewardship of the new X.org, the reference X distribution quickly replaced XFree86 in most Linux distributions and BSD flavours. Though their first release, Xorg 6.7.0, was very little different to the poorly-licensed XFree86 4.4, the writing was on the wall and most distributions were running Xorg by early 2005. Rarely has a software fork replaced its progenitor so quickly, but virtually all X developers doing new work were founders of the fork or quickly joined. This included Keith Packard, architect of many of XFree86's innovative features, whose vocal public clashes with the XFree86 core development team were key in generating the will to fork.
The first Xorg release with a full release cycle, 6.8.0 followed in September 2004, which included a number of important additions in different degrees of stability. Most prominent but most experimental was the XComposite extension, which allowed much greater control over the display of windows. While this initially was used to implement true transparent windows, it permits a variety of uses including screen magnification for accessibility and the presence of ordinary X windows within a fully-3D interface such as OpenCroquet.
The much-delayed release of Xorg 7.0 brought with it many important improvements, but none as prominent as the division of the core X distribution into discrete modules with their own release cycles, which succeeded in accelerating the introduction of X improvements into distributions and also in lowering the barrier to entry for new X developers. Since 7.0 there have not been as many sweeping changes but a slow, ceaseless introduction of new features, bugfixes, and cleanups, similar to the Linux kernel or GNOME. X has finally joined the rest of the modern free software ecosystem.
The X distribution primarily consists of a display management program called the X server, a protocol that other programs, called X clients can use to display themselves on the X server's screen, and a library for writing X clients without detailed network programming, Xlib. A complete X install also includes a variety of basic clients and utilities for managing the general X environment. These parts were traditionally bundled together into a single software package, but, as of Xorg version 7, the X distribution is divided into a number of individual packages with their own release schedule.
The original X was amazingly primitive by modern standards. Lacking a facility for managing windows or launching programs, early X sessions consisted of a preselected list of programs started and positioned in the .xinitrc configuration file. Any additional programs run from a terminal emulator also required manual specification of position and size on the command line. The X developers' response to this was typical of the X development process; rather than adding window management and automatic window borders to the X server, they added a special X application, the window manager, whose sole purpose is to handle the placement, movement, and decoration of windows.
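A minimal .xinitrc of that pre-window-manager era might have looked something like the following (a hypothetical reconstruction; the geometry values are invented for illustration):

```shell
# Each client is started in the background with an explicit size and
# position, since nothing else will place the windows.
xclock -geometry 164x164+0+0 &
xterm  -geometry 80x24+180+0 &
# The last client runs in the foreground; when it exits, the X session ends.
exec xterm -geometry 80x24+180+340
```

The same file format survives today, except that the final exec line now typically launches a window manager or an entire desktop session.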
The introduction of window management highlighted an issue that has come back to haunt X many times over the years; many operations that are fundamental 'server-side' parts of other display layers require cooperation by X clients to implement. The Inter-Client Communication Conventions Manual (ICCCM) was promulgated in an attempt to standardize this cooperation, but it is a complex standard that is difficult to implement and contains a number of now-archaic restrictions. This confusion was responsible for at least a decade of poor X usability before high-level free software toolkits that abstracted away the costs of the ICCCM became ubiquitous in the early 2000s.
Another facility that X does not provide is a set of standard graphical interface elements. The X protocol is extremely low-level, dealing mainly with maintaining square drawing canvasses and routing input events to applications. An additional library above Xlib called a widget toolkit is required to produce buttons, scrollbars, menus, and text boxes, in another instance of the general X convention 'mechanism, not policy'. X distributions do contain an extremely primitive widget library called the Athena widgets or Xaw, but unfortunately Xaw widgets are more primitive than even the original Macintosh widgets, and have a number of conventions quite unlike any other widget set, especially in its idiosyncratic scroll bars.
As a result, a variety of widget toolkits have appeared over X's long lifetime. Early on, the OpenLook and Motif toolkits were available, though both were reasonably proprietary. Motif gained prominence as the basis for the first integrated desktop environment for X, the Common Desktop Environment or CDE. However, as the 1990s wore on, the heavy Windows 3.1-like Motif controls fell out of fashion, and began to dwindle when a group of students at Berkeley, frustrated with the Motif licensing terms, began developing their more modern, free replacement, GTK+. With the GIMP image editor driving GTK+ development and the Norwegian startup Trolltech releasing the X11 version of their cross-platform Qt GUI toolkit to the community, Motif's downfall became inevitable. It is a testament to the power of X's architecture that the system was not stranded with an antiquated 1980s look-and-feel and programming interface but could move ahead maintaining both backwards and forwards compatibility.
An important part of the core X protocol is a mechanism for adding new capabilities to the X protocol, generally referred to as extensions. The availability of this feature has allowed the interaction between the X server and its clients to be drastically altered without breaking compatibility. A number of important features of the modern toolkits depend on extensions, rather than the core protocol, such as the SHAPE extension for non-rectangular windows and widgets, and the MIT-SHM extension for the transfer of images through shared memory rather than the command stream.
The modularity of the X protocol also allows for many of the client-side libraries to be compatibly changed without modifying the server. Most X applications communicate with the server through Xlib, a library that attempts to smooth some of the complexities of the full X protocol by handling certain things itself. The assumptions that were made during Xlib's development in the mid-1980s have not aged well, and many of the details that it 'takes care of' have become a hindrance to modern toolkits as many of these details have become more relevant over the years. In particular, the single command queue used by Xlib limits the use of multi-threading in applications. Furthermore, the ancient Xlib code is known to have errors that are almost impossible to find in its ageing codebase.
When Xlib was written, many X applications were built directly upon it, necessitating an API that was somewhat 'user-friendly'. In the last decade, very few programs have been written on bare Xlib, using a higher-level toolkit such as GTK+ or Qt instead. As such, the authors of the new XCB (X C Bindings) library have reasoned that a small, simple library for interfacing directly to the X protocol would be useful, as its added difficulties would only be a hindrance to relatively few developers. XCB has now reached full stability, with the Xlib in Xorg 7.2 and later having been re-written on top of XCB to maintain compatibility. The toolkits have not yet been rewritten to use XCB instead of Xlib but most believe that it is only a matter of time.
Under the stewardship of the X.Org Foundation and freedesktop.org, the standard X distribution has broken out of its development rut and has become one of the more active areas for improvement of the free software desktop. With the KDE and GNOME desktop environments building modern user interfaces on X and their underlying Qt and GTK+ toolkits allowing applications to be built with similar ease as under other major platforms, X has emerged as an important competitor for the graphical interfaces of other major operating systems.
An important part of modern X is its efficiency in the most common case of a server and client running on the same machine. Lacking a network separating the two programs, they can communicate through an efficient inter-process communication method such as Unix domain sockets, and use shared-memory image transfer to speed up the most message-intensive operations. When these methods are used, X is similar in weight to competing display systems, which have themselves adopted a window server/client architecture similar to X.
One of the first tasks undertaken by X.Org was a full modularization of the core X distribution. Following this separation, the X server, client libraries, graphics drivers, and utilities could be updated independently, with semi-yearly comprehensive 'katamari' releases maintaining a common baseline. The separation of drivers from the main X server code base has enabled rapid development of many drivers, especially those for Intel and ATI graphics hardware.
The most important modern addition to X's display model is the Xrender extension. Xrender adds Porter-Duff composition as a basic display operation, and although this sounds somewhat obscure its effects are widespread and impressive. The most common use of Xrender is to display smooth, anti-aliased graphics and text, which the Xrender architecture is designed to accelerate. The advent of anti-aliased text coincided with the wholesale replacement of X's antiquated, user-hostile font system, finally ending most Linux users' font headaches.
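Porter-Duff composition itself is just per-pixel arithmetic. The workhorse operator, "over", combines a translucent source pixel with whatever lies beneath it; on the premultiplied-alpha pixels that Xrender works with, it reduces to one multiply-add per channel. A small Python sketch of the arithmetic (illustrative only, not X client code):

```python
def over(src, dst):
    """Porter-Duff 'over': composite src on top of dst.

    Both pixels are premultiplied-alpha (r, g, b, a) tuples with
    components in [0, 1], the representation Xrender uses internally.
    """
    src_alpha = src[3]
    return tuple(s + d * (1.0 - src_alpha) for s, d in zip(src, dst))
```

An opaque source simply replaces the destination, a fully transparent one leaves it untouched, and everything in between blends - which is what makes translucent windows and smooth antialiased glyph edges possible.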
The composition capabilities of Xrender can be used to more spectacular effect in combination with the Xcomposite extension. Xcomposite allows a particular client, called the compositing manager, to intercept window contents before they are drawn to the screen, and then to combine the windows in whatever way it sees fit to produce the final screen contents. This opens the door for advanced desktop effects similar to those used by Mac OS X and Windows Vista, with or without GPU acceleration. The Compiz window manager was built from the ground up to include a compositing manager, and is used in a number of modern distributions including Ubuntu and openSUSE to provide translucent windows, drop shadows, and Expose-like window switching, among other things.
Composition managers are not only for fancy 'bling' effects, though. Accessibility applications benefit greatly from the ability to modify the appearance of a window before it is shown, as magnification or increased contrast of windows can help visually impaired users (or even users on poor displays) use the computer's GUI. Composition can also help with the use of virtual desktops, as the desktop can be made to 'slide' across, providing a visual cue to the user. While this is most useful to the novice user, power users may also find benefits from the composition manager. Compiz can display virtual desktops on the faces of a 3D polygonal prism, called the 'desktop cube' for the common case of four desktops, which can be freely rotated to visualize and choose a desktop.
The new Plasma desktop in the KDE 4 desktop uses composition as the basis of its attempt to redefine the basic GUI elements for the better. While it is still beta software and currently implements a relatively conventional, though very pretty, desktop, Plasma aims to be a test-bed for new styles of human-computer interaction. Surely, the promise of pervasive desktop composition has yet to be fully realized by the more creative programmers in the free software world.
The X Window System has come a long way from its cradle at MIT to become the keystone of the widespread free software desktop. Through all of this it has maintained an astounding amount of compatibility; a client from the late 1980s will still be able to display on today's newest servers, even without a recompile (though they would be unlikely to work on the same machine). While the primitive nature of its drawing model limited it to the most powerful computers when it was first released, X11's overhead is now comparable to or even less than competing display layers from Apple and Microsoft. It remains under active development and will continue to be optimized and extended for many years to come.
This writeup is copyright 2008 D.G. Roberge and is released under the Creative Commons Attribution-NonCommercial-ShareAlike licence. Details can be found at http://creativecommons.org/licenses/by-nc-sa/3.0/ . | 1 | 17 |
Digital Camera Buying Guide - More Choosing
Digital Camera Choosing
Continue reading this page to learn more about choosing between digital cameras. For more important factors, go back to Digital Camera Choosing Basics, step 2 of this Digital Camera Buying Guide.
There are quite a few types of memory cards, which is where all modern cameras store the photos and video they capture. The SD-HC format is by far the most common one and is also the cheapest per capacity. Only a few non-DSLRs will not accept SD-HC cards, and entry-level to mid-range DSLRs accept them as well. The high-end memory is still Compact Flash due to its potential for fast transfer speeds and high capacities.
Current prices for memory cards are sufficiently low that one should not give much importance to which memory card type a digital camera uses. There are far more important features to choose!
- Compact Flash: relatively cheap, and available in the largest capacities and at the fastest speeds.
- Compact Flash Type I vs. Type II: some cameras only accept Type I Compact Flash cards, which are slimmer than Type II cards. This effectively rules out Microdrives (tiny hard drives that used to be quite economical and available in large capacities, though this is no longer the case; they are also more fragile than memory cards and cannot operate above 10,000 ft of altitude).
- Compact Flash successor: this format uses a high-speed interface and a smaller form-factor similar to SD cards. It offers faster read speeds (from 500 MB/s) and faster write speeds (from 125 MB/s).
- SD: very common, with the widest compatibility among devices such as digital photo-frames, card-readers and laptops.
- SD-HC: high-capacity SD cards which are generally not compatible with SD-only devices. SD cards can always be used where SD-HC cards are accepted. SD-HC are now the cheapest memory cards and are the most commonly accepted among digital cameras.
- SD-XC: extended-capacity SD-HC cards that support sizes above 32 GB, theoretically up to 2 TB. Any camera which accepts SD-XC cards will accept SD-HC and SD cards as well. The reverse is not true, so SD-XC cards are only accepted in SD-XC compatible cameras and devices.
- Micro SD-HC: small versions of SD-HC cards. Originally used in cellular phones; some cameras come with adaptors to use a Micro SD-HC card instead of their native memory type.
- Micro SD-XC: small versions of SD-XC cards. Although mostly common in cellular phones, some ultra-compact digital cameras accept this type of memory too.
- xD: found in Olympus and Fuji cameras; it is one of the most expensive memories, limited in capacity and rather slow. Modern Fuji cameras accept either both xD and SD-HC or only SD-HC. Certain Olympus cameras take Micro-SD cards using an adapter which fits in the xD slot.
- Memory Stick: used by Sony cameras and also quite expensive. The Pro version is faster but otherwise identical.
- Memory Stick Duo: simply a smaller version of the Memory Stick (a Memory Stick Duo can be used in a Memory Stick slot using an adapter, but not vice-versa). There is also a faster Pro version.
Digital cameras preview images either using an LCD screen or some kind of viewfinder. LCD displays can be hard to see in bright light due to unwanted reflections and exposure to direct sunlight. A viewfinder is preferable but rarely available on small cameras, particularly ultra-compact models. The general advantages of a viewfinder are that it rarely reflects stray light and it provides an extra point of stability for precise framing. Several types of viewfinders exist:
- Electronic viewfinders are tiny LCD displays that preview the image as seen by the sensor. They can be extremely accurate in terms of exposure, color, white-balance and framing. Except for those used by now-defunct Konica-Minolta, they are hard to see in low-light. Top of the line EVFs currently have enough precision to judge focus.
- An optical reflex viewfinder is highly recommended for night photography and continuous shooting. Judging focus is rather easy with a reflex viewfinder which is needed for precise manual focusing. On the other hand, optical viewfinders do not preview exposure, color or white-balance.
- An optical tunnel is formed using a second lens above the camera's photographic lens. It is occasionally, though rarely, seen on compact digital cameras. It allows coarse framing when the LCD is not usable due to movement or bright light.
- Viewfinder Coverage is the visible percentage of the final picture. The closer to 100% the better. Most EVFs and LCDs show 100% coverage but optical viewfinders generally only show 95% coverage. 100% coverage viewfinders are found on most high-end DSLRs since they are essential to professionals.
- A higher-magnification viewfinder shows a larger view and is preferable. It allows more comfortable viewing of the subject and better judgement of focus and depth-of-field.
Standard size batteries such as "AA" are highly preferable:
- They cost considerably less than any other type of battery.
- These batteries can easily be replaced by disposable ones found almost anywhere in the world.
- AAs keep getting better. Year-after-year, manufacturers produce more powerful and longer lasting versions.
- Solar-chargers are readily available.
Custom Lithium-Ion batteries may last longer than a single set of rechargeable AA batteries, but you can afford several sets of AAs for the price of one Lithium-Ion battery. Plus, in an emergency it would be nearly impossible to find the right battery, since there are so many different models.
Avoid cameras that charge in docks, you cannot use those cameras while recharging them unless the dock can charge a spare too. Charger docks are also an extra thing to carry while traveling.
Weather & Underwater
Weatherproof cameras are designed to withstand adverse weather without actually being submerged under water. They can easily stand rain, snow and dust. Note that a weather-sealed DSLR requires the use of a weather-sealed lens to remain weather-sealed.
Waterproof cameras can actually be submerged under water up to a maximum depth dictated by the camera specifications. This is usually between 3m (10') and 10m (33'), so this is usable for swimming and snorkeling but not for SCUBA diving.
The general solution for deep immersion is to use a specially designed underwater casing. Those are almost always model specific, so if this is a requirement, you must check for availability before deciding on a camera.
A computer network, often simply referred to as a network, is a collection of computers and devices interconnected by communications channels that facilitate communication and allow sharing of resources and information among the interconnected devices. Put more simply, a computer network is a collection of two or more computers linked together for the purposes of sharing information and resources, among other things. Computer networking or data communications (datacom) is the engineering discipline concerned with computer networks. Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology and/or computer engineering, since it relies heavily upon the theoretical and practical application of these scientific and engineering disciplines.
A sample overlay network: IP over SONET over Optical
Networks may be classified according to a wide variety of characteristics such as medium used to transport the data, communications protocol used, scale, topology, organizational scope, etc.
A communications protocol defines the formats and rules for exchanging information via a network. Well-known communications protocols are Ethernet, which is a family of protocols used in LANs, the Internet Protocol Suite, which is used not only in the eponymous Internet, but today nearly ubiquitously in any computer network.
Before the advent of computer networks that were based upon some type of telecommunications system, communication between calculation machines and early computers was performed by human users by carrying instructions between them. Many of the social behaviors seen in today's Internet were demonstrably present in the nineteenth century and arguably in even earlier networks using visual signals.
- In September 1940, George Stibitz used a teletype machine to send instructions for a problem set from Dartmouth College to his Complex Number Calculator in New York and received results back by the same means. Linking output systems like teletypes to computers was an interest at the Advanced Research Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was hired and developed a working group he called the "Intergalactic Network", a precursor to the ARPANET.
- Early networks of communicating computers included the military radar system Semi-Automatic Ground Environment (SAGE), started in the late 1950s
- The commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) which went online with two connected mainframes in 1960.
- In 1964, researchers at Dartmouth developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections.
- Throughout the 1960s, Leonard Kleinrock, Paul Baran and Donald Davies independently conceptualized and developed network systems which used packets that could be routed between computer systems over a network.
- In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN).
- The first widely used telephone switch that used true computer control was introduced by Western Electric in 1965.
- In 1969, the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET network using 50 kbit/s circuits.
- Commercial services using X.25 were deployed in 1972, and later used as an underlying infrastructure for expanding TCP/IP networks.
Today, computer networks are the core of modern communication. All modern aspects of the Public Switched Telephone Network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade, and this boom in communications would not have been possible without the progressively advancing computer network. Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from the researcher to the home user.
- LAN - Local Area Network
- WLAN - Wireless Local Area Network
- WAN - Wide Area Network
- MAN - Metropolitan Area Network
- SAN - Storage Area Network, System Area Network, Server Area Network, or sometimes Small Area Network
- VPN - virtual private network
Facilitate communications
Using a network, people can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.
Permit sharing of files, data, and other types of information
In a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data & information on shared storage devices is an important feature of many networks.
Share network and computing resources
In a networked environment, each computer on a network may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks.
May be insecure
A computer network may be used by computer hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from normally accessing the network (denial of service).
May interfere with other technologies
Power line communication strongly disturbs certain forms of radio communication, e.g., amateur radio. It may also interfere with last mile access technologies such as ADSL and VDSL .
May be difficult to set up
A complex computer network may be difficult to set up. It may also be very costly to set up an effective computer network in a large organization or company.
Computer networks can be classified according to the hardware and associated software technology that is used to interconnect the individual devices in the network, such as electrical cable (HomePNA, power line communication, G.hn), optical fiber, and radio waves (wireless LAN). In the OSI model, these are located at levels 1 and 2.
A well-known family of communication media is collectively known as Ethernet. It is defined by IEEE 802 and utilizes various standards and media that enable communication between devices. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.
- Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair cabling consists of copper wires twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into a pair. Computer networking cabling (wired Ethernet, as defined by IEEE 802.3) consists of four pairs of copper cabling that can be used for both voice and data transmission. Twisting the wires together helps to reduce crosstalk and electromagnetic induction. Transmission speeds range from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms, Unshielded Twisted Pair (UTP) and Shielded Twisted Pair (STP), each manufactured in a range of category ratings for different scenarios.
- Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire wrapped with insulating layer typically of a flexible material with a high dielectric constant, all of which are surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speed range from 200 million to more than 500 million bits per second.
- ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
- Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective layers that carries data by means of pulses of light. It transmits light which can travel over extended distances. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than for coaxial cables and thousands of times faster than a twisted-pair wire. This capacity may be further increased by the use of colored light, i.e., light of multiple wavelengths. Instead of carrying one message in a stream of monochromatic light impulses, this technology can carry multiple signals in a single fiber.
- Terrestrial microwave communication uses Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 miles) apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.
- Communications satellites use microwave radio as their telecommunications medium; these signals are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically 35,400 km (22,200 miles) above the equator (for geosynchronous satellites). These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
- Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next.
- Wireless LANs use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of open-standards wireless radio-wave technology is IEEE 802.11.
- Infrared communication can transmit signals between devices within small distances of typically no more than 10 meters. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
- A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
There have been various attempts at transporting data over more or less exotic media:
- IP over Avian Carriers was a humorous April Fools' Request for Comments, issued as RFC 1149. It was implemented in real life in 2001.
- Extending the Internet to interplanetary dimensions via radio waves.
A practical limit in both cases is the round-trip delay time which constrains useful communication.
A communications protocol defines the formats and rules for exchanging information via a network and typically comprises a complete protocol suite which describes the protocols used at various usage levels. An interesting feature of communications protocols is that they may be - and in fact very often are - stacked above each other, which means that one is used to carry the other. The example for this is HTTP running over TCP over IP over IEEE 802.11, where the second and third are members of the Internet Protocol Suite, while the last is a member of the Ethernet protocol suite. This is the stacking which exists between the wireless router and the home user's personal computer when surfing the World Wide Web.
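The stacking described above can be illustrated with a toy loopback exchange: an HTTP-formatted message (application layer) is handed to a TCP socket (transport layer), which the operating system carries over IP to the loopback interface. The server and the "hello" payload below are invented for the sketch; it is not a real web server.

```python
import socket
import threading

def tiny_server(listener):
    # Accept one TCP connection and answer with a minimal HTTP response.
    conn, _ = listener.accept()
    conn.recv(1024)                                  # the HTTP request rides on TCP
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()

# A TCP listener bound to an IP loopback address (the TCP-over-IP layers).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))                      # port 0: let the OS pick one
listener.listen(1)
threading.Thread(target=tiny_server, args=(listener,)).start()

# The application layer is just text in the HTTP format, handed to TCP.
client = socket.create_connection(listener.getsockname())
client.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
chunks = []
while True:                                          # read until the server closes
    chunk = client.recv(1024)
    if not chunk:
        break
    chunks.append(chunk)
reply = b"".join(chunks)
client.close()
listener.close()
print(reply.split(b"\r\n")[0].decode())              # the HTTP status line
```

Each layer here is oblivious to the one above it: TCP moves opaque bytes, and IP moves opaque TCP segments, which is exactly what makes the stacking composable.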
Communication protocols have themselves various properties, such as whether they are connection-oriented versus connectionless, whether they use circuit mode or packet switching, or whether they use hierarchical or flat addressing.
There exist a multitude of communication protocols, a few of which are described below.
Ethernet is a family of connectionless protocols used in LANs, described by a set of standards together called IEEE 802 published by the Institute of Electrical and Electronics Engineers. It has a flat addressing scheme and is mostly situated at levels 1 and 2 of the OSI model. For home users today, the most well-known member of this protocol family is IEEE 802.11, otherwise known as Wireless LAN (WLAN). However, the complete protocol suite deals with a multitude of networking aspects not only for home use, but especially when the technology is deployed to support a diverse range of business needs. MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol, IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol which forms the basis for the authentication mechanisms used in VLANs, but also found in WLANs - it is what the home user sees when they have to enter a "wireless access key".
Internet Protocol Suite
The Internet Protocol Suite is used not only in the eponymous Internet, but today nearly ubiquitously in any computer network. While at the Internet Protocol (IP) level it operates connectionless, it also offers a connection-oriented service layered on top of IP, the Transmission Control Protocol (TCP). Together, TCP/IP offers a semi-hierarchical addressing scheme (IP address plus port number).
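The connectionless side of the suite, and the (IP address, port) addressing scheme, can be sketched with a UDP datagram over the loopback interface. The "ping" payload is illustrative.

```python
import socket

# Connectionless delivery: each datagram is individually addressed with an
# (IP address, port) pair; no connection is set up before sending.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0: let the OS pick one
endpoint = receiver.getsockname()        # the semi-hierarchical address: (IP, port)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", endpoint)         # note: no connect() call

data, source = receiver.recvfrom(1024)   # the sender's (IP, port) arrives too
print(data, "from", source)
sender.close()
receiver.close()
```

Contrast this with TCP, where a connection must be established first and the same endpoint pair then identifies an ongoing byte stream rather than a single datagram.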
Synchronous Optical NETworking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.
Asynchronous Transfer Mode
Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit- and packet-switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.
While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user.
Networks are often classified by their physical or organizational extent or their purpose, such as
- Personal area network
- Home network
- Local area network
- Storage area network
- Campus area network
- Wide area network
- Metropolitan area network
- Virtual private network
Usage, trust level, and access rights differ between these types of networks.
Personal area network
A personal area network (PAN) is a computer network used for communication among computer and different information technological devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters. A wired PAN is usually constructed with USB and Firewire connections while technologies such as Bluetooth and infrared communication typically form a wireless PAN.
Local area network
A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines).
Typical library network, in a branching tree topology and controlled access to resources
All interconnected devices must understand the network layer (layer 3), because they are handling multiple subnets (the different colors). Those inside the library, which have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and academic networks' customer access routers.
The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates of up to 10 Gbit/s. IEEE has projects investigating the standardization of 40 and 100 Gbit/s operation. Local area networks can be connected to a wide area network using routers.
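The quoted data rates translate directly into transfer times. A small sketch of the idealized arithmetic, ignoring protocol overhead and latency:

```python
def transfer_seconds(size_bytes, rate_bits_per_second):
    """Idealized transfer time: bytes are converted to bits, then divided
    by the link rate. Real transfers are slower due to framing overhead,
    acknowledgements, and congestion."""
    return size_bytes * 8 / rate_bits_per_second

one_gigabyte = 10**9  # bytes
for label, rate in [("Fast Ethernet, 100 Mbit/s", 100e6),
                    ("Gigabit Ethernet, 1 Gbit/s", 1e9),
                    ("10 Gigabit Ethernet, 10 Gbit/s", 10e9)]:
    print(f"{label}: {transfer_seconds(one_gigabyte, rate):.1f} s per gigabyte")
```

At 1 Gbit/s a gigabyte takes eight seconds in the ideal case, which is why each tenfold step in Ethernet speed matters so much for bulk data movement.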
A home network is a residential LAN which is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or Digital Subscriber Line (DSL) provider.
Storage area network
A storage area network (SAN) is a dedicated network that provides block-level access to consolidated data storage, making remote storage devices appear to servers as locally attached devices.
Campus network
A campus network is a computer network made up of an interconnection of local area networks (LANs) within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling etc.) are almost entirely owned (by the campus tenant / owner: an enterprise, university, government etc.).
In the case of a university campus-based campus network, the network is likely to link a variety of campus buildings including, for example, academic colleges or departments, the university library, and student residence halls.
A Backbone network or network backbone is part of a computer network infrastructure that interconnects various pieces of network, providing a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks in the same building, in different buildings in a campus environment, or over wide areas. Normally, the backbone's capacity is greater than that of the networks connected to it.
A large corporation which has many locations may have a backbone network that ties all of these locations together, for example, if a server cluster needs to be accessed by different departments of a company which are located at different geographical locations. The equipment which ties these departments together constitute the network backbone. Network performance management including network congestion are critical parameters taken into account when designing a network backbone.
A specific case of a backbone network is the Internet backbone, which is the set of wide-area network connections and core routers that interconnect all networks connected to the Internet.
Metropolitan area network
A Metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area such as a city or country, or spans even intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer. A WAN is formed by interconnecting local area networks.
Sample EPN made of Frame relay WAN connections and dialup remote access.
Enterprise private network
An enterprise private network is a network built by an enterprise to interconnect various company sites, e.g., production sites, head offices, remote offices, shops, in order to share computer resources.
Virtual private network
A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.
Sample VPN used to interconnect 3 offices and remote users
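The tunneling idea behind a VPN can be illustrated with a toy encapsulation: the inner packet keeps the private addresses of the two sites, while an outer header with public addresses carries it across the larger network. The field names and addresses below (IETF documentation ranges) are invented for the sketch; real VPNs operate on byte-level frames, not dictionaries.

```python
# Toy encapsulation of one packet inside another, the essence of tunneling.
def encapsulate(inner_packet, outer_src, outer_dst):
    # The whole inner packet becomes the payload of the outer one.
    return {"src": outer_src, "dst": outer_dst, "payload": inner_packet}

def decapsulate(outer_packet):
    # The far end of the tunnel recovers the inner packet unchanged.
    return outer_packet["payload"]

inner = {"src": "10.0.1.5", "dst": "10.0.2.9", "data": "quarterly-report"}
tunneled = encapsulate(inner, "203.0.113.7", "198.51.100.20")

# The public network only ever routes on the outer addresses...
print("outer:", tunneled["src"], "->", tunneled["dst"])
# ...while the private addressing survives the trip intact.
print("inner:", decapsulate(tunneled)["src"], "->", decapsulate(tunneled)["dst"])
```

A production VPN adds authentication and usually encryption of the payload, but the encapsulate/decapsulate round trip is the structural core.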
An internetwork is the connection of two or more private computer networks via a common routing technology (OSI Layer 3) using routers. The Internet can be seen as a special case of an aggregation of many connected internetworks spanning the whole earth. Another such global aggregation is the telephone network.
Networks are typically managed by the organizations which own them. According to the owner's point of view, networks are seen as intranets or extranets. A special case of network is the Internet, which has no single owner but a distinct status when seen by an organizational entity: that of permitting virtually unlimited global connectivity for a great multitude of purposes.
Intranets and extranets
Intranets and extranets are parts or extensions of a computer network, usually a local area network.
An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.
An extranet is a network that is limited in scope to a single organization or entity and also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities—a company's customers may be given access to some part of its intranet—while at the same time the customers may not be considered trusted from a security standpoint. Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although an extranet cannot consist of a single LAN; it must have at least one connection with an external network.
The Internet is a global system of interconnected governmental, academic, corporate, public, and private computer networks. In other words, the Internet is a worldwide interconnection of computers and networks which are either owned privately or publicly. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).
Participants in the Internet use a diverse array of methods of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
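An address space exchanged through BGP is, at its core, a CIDR prefix, and a reachability check reduces to prefix membership. A minimal sketch using Python's standard ipaddress module; the prefix below is a documentation range, not a real announcement.

```python
import ipaddress

# A documentation prefix standing in for an announced address space.
prefix = ipaddress.ip_network("198.51.100.0/24")

inside = ipaddress.ip_address("198.51.100.17")
outside = ipaddress.ip_address("203.0.113.17")

print(inside in prefix)       # membership test against the prefix
print(outside in prefix)      # an address from a different prefix
print(prefix.num_addresses)   # a /24 spans 256 addresses
```

Routers hold many such prefixes and forward each packet toward the most specific (longest) matching one, but each individual match is exactly this membership test.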
A network topology is the layout of the interconnections of the nodes of a computer network. Common layouts are:
- A bus network : all nodes are connected to a common medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.
- A star network : all nodes are connected to a special central node. This is the typical layout found in a Wireless LAN, where each wireless client connects to the central Wireless access point.
- A ring network : each node is connected to its left and right neighbor node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.
- A mesh network : each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
- A fully connected network: each node is connected to every other node in the network.
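These layouts can be sketched as adjacency lists. The sketch below is illustrative, assuming nodes numbered 0 to n-1; the function names are not from any networking library.

```python
def ring(n):
    # Each node i links to its left and right neighbour (mod n).
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def star(n):
    # Node 0 is the central hub; all other nodes connect only to it.
    adj = {0: list(range(1, n))}
    adj.update({i: [0] for i in range(1, n)})
    return adj

def full_mesh(n):
    # Every node connects directly to every other node.
    return {i: [j for j in range(n) if j != i] for i in range(n)}
```

For example, `ring(4)[0]` is `[3, 1]`: node 0's neighbours are its left (3) and right (1) nodes.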
Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is a star, because all neighboring connections are routed via a central physical location.
An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay are connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one.
For example, many peer-to-peer networks are overlay networks because they are organized as nodes of a virtual system of links run on top of the Internet. The Internet was initially built as an overlay on the telephone network .
The most striking example of an overlay network, however, is the Internet itself: At the IP layer, each node can reach any other by a direct connection to the desired IP address, thereby creating a fully connected network; the underlying network, however, is composed of a mesh-like interconnect of subnetworks of varying topologies (and, in fact, technologies). Address resolution and routing are the means that allow the mapping of the fully connected IP overlay network to the underlying ones.
Overlay networks have been around since the invention of networking when computer systems were connected over telephone lines using modems, before any data network existed.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually map) indexed by keys.
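A minimal sketch of such a key-to-node mapping, assuming a simple "successor" rule over hashed identifiers (real DHTs such as Chord add finger tables, replication, and churn handling, none of which appear here):

```python
import hashlib

def node_id(name, bits=16):
    # Hash a name onto a 2**bits identifier ring.
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << bits)

def lookup(key, nodes, bits=16):
    # Successor rule: the key belongs to the first node at or after
    # its hashed position on the ring, wrapping around at the top.
    k = node_id(key, bits)
    ids = sorted((node_id(n, bits), n) for n in nodes)
    for i, n in ids:
        if i >= k:
            return n
    return ids[0][1]  # wrap around to the lowest identifier
```

The same key always maps to the same node, regardless of the order in which nodes are listed.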
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes End System Multicast and Overcast for multicast; RON (Resilient Overlay Network) for resilient routing; and OverQoS for quality of service guarantees, among others.
Basic hardware components
Apart from the physical communications media themselves as described above, networks comprise additional basic hardware building blocks interconnecting their terminals, such as network interface cards (NICs), hubs, bridges, switches, and routers.
Network interface cards
A network card, network adapter, or NIC (network interface card) is a piece of computer hardware designed to allow computers to physically access a networking medium. It provides a low-level addressing system through the use of MAC addresses.
Each Ethernet network interface has a unique MAC address which is usually stored in a small memory device on the card, allowing any device to connect to the network without creating an address conflict. Ethernet MAC addresses are composed of six octets. Uniqueness is maintained by the IEEE, which manages the Ethernet address space by assigning 3-octet prefixes to equipment manufacturers. The list of prefixes is publicly available. Each manufacturer is then obliged to both use only their assigned prefix(es) and to uniquely set the 3-octet suffix of every Ethernet interface they produce.
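The octet structure described above can be illustrated with a small parser; the example address used below is arbitrary, not a real IEEE assignment.

```python
def split_mac(mac):
    """Split a colon-separated MAC address into its 3-octet OUI
    (the IEEE-assigned manufacturer prefix) and its 3-octet suffix."""
    octets = mac.lower().split(":")
    if len(octets) != 6:
        raise ValueError("expected 6 colon-separated octets")
    for o in octets:
        int(o, 16)  # raises ValueError on non-hex octets
    return ":".join(octets[:3]), ":".join(octets[3:])
```

The first element of the returned pair is the manufacturer prefix; the second is the per-device suffix the manufacturer must keep unique.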
Repeaters and hubs
A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it, and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. A repeater with multiple ports is known as a hub. Repeaters work on the Physical Layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay which can affect network communication when there are several repeaters in a row. Many network architectures limit the number of repeaters that can be used in a row (e.g. Ethernet's 5-4-3 rule).
Today, repeaters and hubs have been made mostly obsolete by switches (see below).
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges broadcast to all ports except the port on which the broadcast was received. However, bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address to that port only.
Bridges learn the association of ports and addresses by examining the source address of frames that they see on various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge will forward the frame to all ports other than the one on which the frame arrived.
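The learn-then-forward behaviour just described can be sketched as follows; the port names are illustrative.

```python
class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # MAC address -> port it was last seen on

    def handle(self, src, dst, in_port):
        # Learn: the source address is reachable via the arrival port.
        self.table[src] = in_port
        # Forward: to the known port, or flood to all other ports.
        if dst in self.table:
            return {self.table[dst]}
        return self.ports - {in_port}
```

The first frame to an unknown destination floods every other port; once a reply is seen, traffic to that address goes out one port only.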
Bridges come in three basic types:
- Local bridges: directly connect local area networks (LANs).
- Remote bridges: can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, have largely been replaced by routers.
- Wireless bridges: can be used to join LANs or connect remote stations to LANs.
Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets. A switch is distinct from a hub in that it only forwards the frames to the ports involved in the communication rather than all ports connected. A switch breaks the collision domain but represents itself as a broadcast domain. Switches make forwarding decisions of frames on the basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for devices, and cascading additional switches. Some switches are capable of routing based on Layer 3 addressing or additional logical levels; these are called multi-layer switches. The term switch is used loosely in marketing to encompass devices including routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier).
Routers
A router is an internetworking device that forwards packets between networks by processing the addressing information found in the datagram or packet (Internet Protocol information from Layer 3 of the OSI Model). In many situations, this information is processed in conjunction with the routing table (also known as the forwarding table). Routers use routing tables to determine which interface to forward packets to; this can include the "null" (also known as the "black hole") interface, into which data can be sent but receives no further processing.
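A routing-table lookup of this kind can be sketched with Python's standard ipaddress module. In this sketch, longest-prefix match picks the most specific route, and a None next hop stands in for the "null" (black-hole) interface; the table contents are invented for illustration.

```python
import ipaddress

def route(dest, table):
    """Return the next hop for `dest` by longest-prefix match.
    `table` maps CIDR strings to next-hop names; None models the
    'null' (black-hole) interface."""
    dest = ipaddress.ip_address(dest)
    best = None
    for cidr, hop in table.items():
        net = ipaddress.ip_network(cidr)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1] if best else None
```

With a /16 route nested inside a /8, addresses in the /16 take the more specific path; everything else falls through to the default route.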
A firewall is an important aspect of a network with respect to security. It typically rejects access requests from unsafe sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in 'cyber' attacks for the purpose of stealing/corrupting data, planting viruses, etc.
Network performance
Network performance refers to the service quality of a telecommunications product as seen by the customer. It should not be seen merely as an attempt to get "more through" the network.
The following list gives examples of Network Performance measures for a circuit-switched network and one type of packet-switched network, viz. ATM:
- Circuit-switched networks: In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include noise, echo and so on.
- ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique and modem enhancements.
There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modelled instead of measured; one example of this is using state transition diagrams to model queuing performance in a circuit-switched network. These diagrams allow the network planner to analyze how the network will perform in each state, ensuring that the network will be optimally designed.
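The grade-of-service measure above (the fraction of rejected calls) is classically derived from exactly such a state-transition (birth-death) model of a circuit group; the standard Erlang B recurrence computes the resulting blocking probability. A minimal sketch:

```python
def erlang_b(traffic, circuits):
    """Blocking probability for `traffic` erlangs offered to
    `circuits` circuits, via the standard recurrence
    B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1))."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = traffic * b / (n + traffic * b)
    return b
```

For example, `erlang_b(5.0, 10)` is the probability a new call is blocked when 5 erlangs of traffic are offered to a group of 10 circuits; adding circuits lowers the blocking probability.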
In the field of networking, the area of network security consists of the provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and network-accessible resources. Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allow them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, or open to public access. Network security is involved in organizations, enterprises, and all other types of institutions. It does as its title explains: it secures the network, and protects and oversees the operations being done.
In computer networking, resilience is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation.
Views of networks
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection of being in a local area, and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.
Both users and administrators will be aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. Especially when money or sensitive information is exchanged, the communications are apt to be secured by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure Virtual Private Network (VPN) technology.
Results 1 - 50
- Enhancing the cultural identity of early adolescent male...
Bass, Christopher K.; Coleman, Hardin K. // Professional School Counseling; Dec97 Special Issue, Vol. 1 Issue 2, p48
Describes a school-based, Africentric program designed to prepare African American males to take full advantage of the educational opportunities available within the educational system. Methods used in the program; Design and procedure; Average of teachers' ratings of students' behavior. ...
- Black studies: An education or the reflection of a crisis?
Early, Gerald // Academic Questions; Fall98, Vol. 11 Issue 4, p12
Presents the essay `Black Studies: An Education or the Reflection of a Crisis,' delivered at the seventh general conference of the National Association of Scholars in New Orleans, Louisiana on December 12, 1997.
- Colleges and universities where African Americans constitute at least 25 percent of studen...
Early, Gerald // Black Issues in Higher Education; 11/13/97, Vol. 14 Issue 19, p26
Presents a listing of colleges and universities in the United States, in which African Americans constitute at least 25 percent of student enrollment.
- Washington update.
Dervarics, Charles // Black Issues in Higher Education; 04/02/98, Vol. 15 Issue 3, p4
Presents information on happenings in higher education of Blacks in the United States. Details on the challenges which educators working on the reauthorization of the Higher education Act (HEA) face; Indepth look at a Senate legislation which would aid in preserving and restoring historic...
- What's new.
Dervarics, Charles // Black Issues in Higher Education; 05/28/98, Vol. 15 Issue 7, p9
Presents news development about programs, accreditations and opportunities for African Americans' higher education in the United States as of May 28, 1998. Simmons College's launching of three novel programs; Virginia Polytechnic Institute and State University's establishment of the Center for...
- Celebrating and deconstructing our educational progress.
Malveaux, Julianne // Black Issues in Higher Education; 07/23/98, Vol. 15 Issue 11, p64
Comments on the US Census Bureau report `Educational Attainment in the United States.' Upward trend in the educational attainment of African Americans in 1997; High school completion rate for Whites; Role of affirmative action in educational attainment; Difference between high school completion...
- Programs, accreditations & opportunities.
Malveaux, Julianne // Black Issues in Higher Education; 10/01/98, Vol. 15 Issue 16, p11
Presents news items pertaining to Afro-American education as of October 1998. Philander Smith College's establishment of an academic discipline dedicated to the study of Black families; Amoco Foundation's launching of the Minority Engineering Recruitment and Retention Initiative.
- Afrocentric weapons in the recruitment and retention wars.
Hikes, Zenobia Lawrence // Black Issues in Higher Education; 10/01/98, Vol. 15 Issue 16, p84
Discusses issues pertaining to Afro-American education. Parents' interest in an educational institution's academic reputation specific to Afro-Americans; Conservative right's disingenuous call for a color-blind society; American institutions as products of its citizen's cultural differences and...
- What can we do to help black children beat the odds?
Edelman, Marian Wright // Black Collegian; Apr Special Anniversary Issue, Vol. 21, p112
Stresses the responsibility of black adults in inculcating the values that will help their youth beat the odds as they make their way through life. Importance for blacks to take their education seriously; Necessity of planning and economic stability in guaranteeing marriage; Working toward...
- America's classrooms.
Edelman, Marian Wright // Black Enterprise; Sep84, Vol. 15 Issue 2, p34
Increasingly, many educators now are coming to the conclusion that the persistent gap between Black and White educational achievement has less to do with teaching techniques than it does with racial or class stereotypes. All of the panaceas in the world can accomplish little for Black...
- Black independent schools.
Giles, Dari // Essence (Essence); Sep95, Vol. 26 Issue 5, p124
Focuses on Black-owned schools. Diverse day schools; Boarding schools; Tuition fees.
- Blacks' high school dropout rate has declined, report.
Giles, Dari // Jet; 7/9/90, Vol. 78 Issue 13, p22
Reports on the significant drop in the annual dropout rate for black high school students during the 1980s, from 9.6 percent to 6.3 percent, according to a new Census report.
- Record number of black students earned doctorates in 1995, study reveals.
Giles, Dari // Jet; 07/01/96, Vol. 90 Issue 7, p38
Cites a report from the National Research Council which shows that a record number of Afro-Americans earned doctorate degrees in 1995. Comparison with 1994 figures; Blacks as still underrepresented in the doctorate pool; Afro-American women as earning the majority of all non-science doctorates.
- African-American children and the educational process: Alleviating cultural discontinuity through...
Allen, Brenda A.; Boykin, A. Wade // School Psychology Review; 1992, Vol. 21 Issue 4, p586
Attempts to establish that contextual factors informed by certain postulated cultural experiences can influence the cognitive performance of Afro-American children. Model designed to explain link between culture and cognition; Rationale for the school failure exhibited; Enhancement of task...
- The problems of the twenty-first century: Black Studies...
Conyers Jr., James L. // Western Journal of Black Studies; Summer97, Vol. 21 Issue 2, p117
No abstract available.
- Black parents want focus on academics.
Bradley, Ann // Education Week; 08/05/98, Vol. 17 Issue 43, p1
Reports on a survey conducted by Public Agenda, a nonpartisan public-opinion research firm in New York City, which indicated that African American parents want public schools to focus on achievement, rather than on racial diversity and integration. Percentage of black parents that favored...
- Education today.
Bradley, Ann // New York Amsterdam News; 05/10/97, Vol. 88 Issue 19, p35
Presents information on news relating to education of blacks in the State of New York. Doctor Rudolph F. Crew will be honored at the Community Night; Host of the event; Venue for the event; Who will benefit from proceeds gained from the events; Wanda Cruz Lopez awarded her master's degree from...
- All things not being equal: The case for race separate schools.
Steele, Roberta L. // Case Western Reserve Law Review; Winter93, Vol. 43 Issue 2, p591
Supports the adoption of the African-American Immersion School on a trial basis. Immersion School as a race-exclusive but co-educational institution; Goals of the school; Concerns about the isolation of African-American children from mainstream society.
- Many in politics, civil rights movements and education have agendas to which truth and the facts are
Sowell, Thomas // Enterprise/Salt Lake City; 8/21/95, Vol. 25 Issue 8, p15
Argues that those in politics, in the civil rights movements and in the educational establishment have different agendas regarding issues affecting the education of Afro-Americans.
- Why blacks are not going to college.
Bracey, G.W. // Phi Delta Kappan; Feb92, Vol. 73 Issue 6, p494
Reports on a study published in the October 1991 issue of `Sociology of Education' by University of Wisconsin, Madison researchers Robert Hauser and Douglas Anderson. Main finding is that college enrollments for blacks declined in the late 1970s and the 1980s because of lack of money.
- African-American immersion schools in Milwaukee: A view from the inside.
Leake, D.; Leaker, B. // Phi Delta Kappan; Jun92, Vol. 73 Issue 10, p783
Discusses African-American immersion schools. Typical reaction is that such schools are segregationist; Real aim is to create a population of academically competent, self-confident individuals; Key concepts derived from African concept of educational social process.
- Frustrations of an African-American parent: A personal and professional account.
Boutte, G.S. // Phi Delta Kappan; Jun92, Vol. 73 Issue 10, p786
Discusses the author's experiences with her daughter's schooling in a predominantly white elementary school. Did not want daughter `tracked' as below average; Urges teachers to examine racial beliefs and practices to eliminate discriminatory behavior; Racial issues can be defused by teachers...
- A tale not widely told.
Lewis, Anne C. // Phi Delta Kappan; Nov92, Vol. 74 Issue 3, p196
Presents some information about black youngsters in the nation's schools. Blacks have been steadily narrowing gap between themselves and whites in math and science proficiency; Average reading proficiency of blacks higher than 20 years ago; Between 1976 and 1992 mean scores of black students on...
- Self-help at its best.
Lewis, Anne C. // Newsweek; 3/16/87, Vol. 109 Issue 11, p8
Opinion. A senior admissions officer at Harvard University says the scholastic credentials of ghetto students are at their lowest in 17 years. Too few programs are being developed by black America itself to help relieve the problem. Black youngsters must be convinced that academic...
- News wrap-up.
Lewis, Anne C. // Black Issues in Higher Education; 8/08/96, Vol. 13 Issue 12, p26
Presents higher education news relevant to Afro-Americans for the week of August 8, 1996. Retirement of Dr. Cordell Wynn from Stillman College; Number of General Educational Development recipients; Success of the fundraising efforts for the College Fund/United Negro College Fund. INSETS: A...
- News wrap-up.
Fields, Cheryl D. // Black Issues in Higher Education; 12/26/96, Vol. 13 Issue 22, p33
Reports on developments related to higher education of African Americans as of December 26, 1996. Includes Spellman College's search for a new college president; Anita Hill's plans to become a visiting professor at the University of California-Berkeley; Michael Eric Dyson's appointment as...
- News wrap-up.
Fields, Cheryl D. // Black Issues in Higher Education; 01/09/97, Vol. 13 Issue 23, p35
Reports on news and developments related to Afro-American higher education. Includes the recognition of Ebonics or Black English; Zell Miller's receiving the John A. Griffin Award for Advancing Equity in Education from the Southern Education Foundation; Lou Rawls Parade of Stars to benefit The...
- Free education students of African descent.
Person-Lynn, Kwaku // Black Issues in Higher Education; 01/09/97, Vol. 13 Issue 23, p108
Proposes Education Free Zones which would allow Afro-American students to attend designated colleges or universities for free in the United States. Compensation of people of African descent for the years of free labor provided during slavery in America; Comments on the movement for reparations;...
- Washington update.
Dervarics, Charles // Black Issues in Higher Education; 01/23/97, Vol. 13 Issue 24, p6
Presents government rules related to higher education of blacks. Changes in the Congressional Black Caucus and other panels for 1997; Participation of historically and predominantly black colleges and universities in a federal program to recruit students as reading tutors for disadvantaged...
- News wrap-up.
Dervarics, Charles // Black Issues in Higher Education; 01/23/97, Vol. 13 Issue 24, p33
Presents developments related to higher education of blacks. Job cuts at the University of the District of Columbia in Washington; Refusal of two female cadets who filed allegations of harassment at the Citadel in South Carolina to return to the military college; Appointment of Doug Williams as...
- News wrap-up.
Dervarics, Charles // Black Issues in Higher Education; 02/20/97, Vol. 13 Issue 26, p50
Presents news items related to the higher education of Afro-Americans as of February 20, 1997. Includes Attorney General Dan Morales' views on a court decision prohibiting the use of race as a factor in admissions or financial aid decisions in Texas schools; Appropriation of funds for...
- Patterson Research Institute reports on educational profile of African Americans.
Ruffins, Paul // Black Issues in Higher Education; 03/20/97, Vol. 14 Issue 2, p14
Details the Patterson Research Institute's report on the educational profile of African Americans. Exploration of the participation of African Americans in postsecondary education; Improvement in black enrollment in higher education; Increase in black enrollment in first professional schools;...
- News wrap-up.
Ruffins, Paul // Black Issues in Higher Education; 03/20/97, Vol. 14 Issue 2, p37
Presents news items relating to African-Americans in higher education as of March 20, 1997. US President Bill Clinton's speech to the American Council on Education; Hearing on Grambling State University's allegations on illegal out-of-season practices and tryouts; Affirmative action suits filed...
- Positions directory.
Ruffins, Paul // Black Issues in Higher Education; 05/29/97, Vol. 14 Issue 7, p67
Presents a directory relating to black issues in higher education.
- African American doctoral degrees--all disciplines combined.
Ruffins, Paul // Black Issues in Higher Education; 07/24/97, Vol. 14 Issue 11, p24
Presents statistical information on African American doctoral degrees.
- Black private schools.
Tice, Terrence N. // Education Digest; Feb1993, Vol. 58 Issue 6, p52
Cites the Spring 1992 `Journal of Negro Education' which was devoted to `African Americans and Independent Schools: Status, Attainment, and Issues.' Traditions; Following the Civil War; Organization by the Nation of Islam since 1930s; Details.
- Black America and school choice: Charting a new course.
Barnes, Robin D. // Yale Law Journal; Jun97, Vol. 106 Issue 8, p2375
Examines the educational opportunities for Afro-Americans. Role of autonomy in accessing quality education; Failure of school desegregation; Fundamental characteristics for creating successful schools; Potential of charter schools; Importance of involvement of parents in their children's education.
- The status of African Americans in higher education.
Stevens, Joann // Liberal Education; Summer97, Vol. 83 Issue 3, p40
Comments on the trends relevant to the gains in higher education attained by Afro-Americans in the 1990s. Research findings indicating the difficulties and challenges facing Afro-Americans in higher education; Instances of discrimination in college access; Issues relevant to socioeconomic...
- 50 years of black education.
Jones, Lisa C. // Ebony; Sep1995, Vol. 50 Issue 11, p69
Focuses on the education of Afro-Americans. Historical background; Education of black professors; Racial discrimination; State of college campuses; Progress made by Black College students in the 1970s; Student rallies.
- Defining the Situation.
Stewart, Mac A. // Negro Educational Review; Jan2005, Vol. 56 Issue 1, p1
Presents an introduction to the January 2005 issue of the journal "Negro Educational Review."
- Why is black educational achievement rising?
Armor, David J. // Public Interest; Summer92, Issue 108, p65
Explores the causes of black achievement gains. Black and white achievement trends; Misleading statistics; School desegregation; Compensatory education; Socioeconomic status; Implications for education policy; Details.
- Bridges to Africa.
Bailey, Willia // Educational Leadership; Feb1994, Vol. 51 Issue 5, p86
Focuses on the Building Bridges Project initiated by the author to bridge African American elementary school students to their African roots. Importance of ancestral pride in the self-esteem of children; Description of the project; List of letters sent to and received from Africa.
- More blacks attending college, but still underrepresented.
Bailey, Willia // Jet; 3/17/97, Vol. 91 Issue 17, p13
Discusses the findings of the report `The Status of Education in Black America, Volume I: Higher and Adult Education.' Statistics relative to black attendance at United States colleges.
- Big boost in college degrees to blacks reported.
Sandham, Jessica L. // Education Week; 03/05/97, Vol. 16 Issue 23, p16
Reports that African-Americans are entering college and succeeding according to the book `The Status of Education in Black America.' Increase in the number of blacks awarded with bachelor's degrees; Gains attributed to black women; Obstacles faced by African-Americans.
- `Anti-achievement attitude' among African-Americans challenged.
Schmidt, Peter // Education Week; 5/11/94, Vol. 13 Issue 33, p10
Reports on results of a study on African-Americans in Indiana schools by the Indiana Youth Institute. Number of people surveyed; Tendency of young African-Americans to undervalue their achievements; Racial integration of schools.
- Affirmative Action and the JNE.
Johnson, Sylvia T. // Journal of Negro Education; Winter/Spring2000, Vol. 69 Issue 1/2, p1
Presents an overview of the articles in the 2000 issue of the 'Journal of Negro Education' in the United States. Focus on research and policy studies on access of African Americans to educational opportunities; Redefinition of the logic of affirmative action by opponents; Factors influencing...
- Race, the College Classroom, and Service Learning: A Practitioner's Tale.
Philipsen, Maike Ingrid // Journal of Negro Education; Spring2003, Vol. 72 Issue 2, p230
Recounts how service learning might serve as a tool to facilitate meaningful discussions on race in an undergraduate course on Social Foundations Education. Dynamics that unfold whenever race is discussed in the classroom; Impact of service learning on students' thinking about race; Benefits...
- Reclaiming Our Communities.
Ogwude, Emmanuel // Diverse: Issues in Higher Education; 8/24/2006, Vol. 23 Issue 14, p7
A letter is presented encouraging African-American teachers who have left communities to find greener pastures to return and help in mentoring African American youth.
- Roving camera.
Gilbert, James // New York Amsterdam News; 6/17/95, Vol. 86 Issue 24, p15
Presents opinion of five Black citizens regarding the impact of the of reduction educational assistance on their future. Participation in Work Employment Benefits program; Higher rate of student dropout; Devastation of minorities.
- Students aim for the gold in the Olympics of the mind.
Moorer, Talise D. // New York Amsterdam News; 03/18/99, Vol. 90 Issue 12, p10
Reports on holding of the 1999 Afro-Academic, Cultural, Technological and Scientific Olympics (ACT-SO), which aims to promote academic excellence among blacks. Blacks' reputation in sports and the entertainment industry; Minority high school students' participation in the program; ACT-SO's... | 1 | 47 |
<urn:uuid:b4ad7483-7dce-4c72-a3a0-e6bf08e5841b> Scottish literature is literature written in Scotland or by Scottish writers. It includes literature written in English, Scottish Gaelic, Scots, Brythonic, French, Latin and any other language in which a piece of literature was ever written within the boundaries of modern Scotland.
The people of northern Britain spoke forms of Celtic languages. Much of the earliest Welsh literature was actually composed in or near the country we now call Scotland, as Brythonic speech (the ancestor of Welsh) was not then confined to Wales and Cornwall. While all modern scholarship indicates that the Picts spoke a Brythonic language (based on surviving placenames, personal names and historical evidence), none of their literature seems to have survived into the modern era.
Some of the earliest literature known to have been composed in Scotland includes:
The ethnic language of the Scots was Gaelic. Gael was actually what the word Scot meant in English before c. 1500. Between c. 1200 and c. 1700 the learned Gaelic elite of both Scotland and Ireland shared a literary form of Gaelic. It is possible that more Middle Irish literature was written in medieval Scotland than is often thought, but has not survived because the Gaelic literary establishment of eastern Scotland died out before the 14th century. Some Gaelic texts written in Scotland have survived in Irish sources. Gaelic literature written in Scotland before the 14th century includes the Lebor Bretnach, the product of a flourishing Gaelic literary establishment at the monastery of Abernethy.
The first known text to be composed in the form of northern Middle English spoken in the Lowlands (now called Early Scots) did not appear until the fourteenth century. It is clear from John Barbour, and a plethora of other evidence, that the Fenian Cycle flourished in Scotland. There are allusions to Gaelic legendary characters in later Anglo-Scottish literature (oral and written).
In the 13th century, French flourished as a literary language, and produced the Roman de Fergus, the earliest piece of non-Celtic vernacular literature to come from Scotland. Moreover, many other stories in the Arthurian Cycle, written in French and preserved only outside Scotland, are thought by some scholars (D. D. R. Owen for instance) to have been written in Scotland.
In addition to French, Latin too was a literary language. Famous examples would be the Inchcolm Antiphoner and the Carmen de morte Sumerledi, a poem which exults triumphantly in the victory of the citizens of Glasgow over Somairle mac Gilla Brigte. And of course, the most important medieval work written in Scotland, the Vita Columbae, was also written in Latin.
Among the earliest Middle English or Early Scots literature is John Barbour's Brus (14th century), Wyntoun's Kronykil and Blind Harry's Wallace (15th century). From the 15th century much Middle Scots literature was produced by writers based around the royal court in Edinburgh and the University of St Andrews. Alexander Montgomerie, the 16th century poet, for example, was in the service of King James VI. James I of Scotland himself wrote The Kingis Quair. At the request of James V of Scotland, John Bellenden translated Hector Boece's Historia Gentis Scotorum as Chroniklis of Scotland (published 1536).
He also translated the first five books of Livy. These remain the earliest existing specimens of Scottish literary prose.
Versions of popular continental romances were also produced, for example: Launcelot o the Laik and The Buik o Alexander.
In the early 16th century, Gavin Douglas produced a Scots translation of the Aeneid. Chaucerian, classical and French literary language continued to influence Scots literature up until the Reformation. Writers such as Robert Henryson, William Dunbar, and David Lyndsay led a golden age of Scottish literature in the 15th and early 16th centuries. George Bannatyne collected many poems of the Middle Scots period.
The Scottish ballad tradition can be traced back to the early 17th century. Francis James Child's compilation, The English and Scottish Popular Ballads (1882-1898) contains many examples, such as The Elfin Knight (first printed around 1610) and Lord Randal.
In Scotland, after the 17th century, anglicisation increased, though Lowland Scots was still spoken by the vast majority of the population of the Lowlands. At the time, many of the oral ballads from the borders and the North East were written down. Writers of the period include Robert Sempill (c. 1595-1665), Lady Wardlaw and Lady Grizel Baillie.
The Scottish novel developed in the 18th century, with such writers as Tobias Smollett.
Allan Ramsay (1686-1758) laid the foundations of a reawakening of interest in older Scottish literature, as well as leading the trend for pastoral poetry. The Habbie stanza was developed as a poetic form.
In 1760, James Macpherson claimed to have found poetry written by Ossian. He published translations which acquired international popularity, being proclaimed as a Celtic equivalent of the Classical epics. Fingal, written in 1762, was speedily translated into many European languages, and its deep appreciation of natural beauty and the melancholy tenderness of its treatment of the ancient legend did more than any single work to bring about the Romantic movement in European, and especially in German, literature, influencing Herder and Goethe in his earlier period. It inspired many Scottish writers, including the young Walter Scott, but it eventually became clear that the poems were not direct translations from the Gaelic but flowery adaptations made to suit the aesthetic expectations of his audience (as has been demonstrated in Derick S. Thomson, The Gaelic Sources of Macpherson's "Ossian").
Among the best known Scottish writers are two who are strongly associated with the Romantic Era, Robert Burns and Walter Scott. Scott's work is not exclusively concerned with Scotland, but his popularity in England and further abroad did much to form the modern stereotype of Scottish culture. Burns is considered Scotland's national bard; his works have only recently been edited to reflect the full breadth of their subject matter, as during the Victorian era he was censored.
Scott collected Scottish ballads and published The Minstrelsy of the Scottish Border before launching into a novel-writing career in 1814 with Waverley, often called the first historical novel. Other novels by Scott which contributed to the image of him as a patriot include Rob Roy. He also wrote a History of Scotland. He was the highest earning and most popular author up to that time.
James Hogg, a writer encouraged by Walter Scott, made creative use of the Scottish religious background in producing his distinctive The Private Memoirs and Confessions of a Justified Sinner, which can be seen as introducing the "doppelgänger" theme which would be taken up later in the century in The Strange Case of Dr Jekyll and Mr Hyde. Hogg may have borrowed his literary motif from the concept of the "co-choisiche" in Gaelic folk tradition.
In the latter half of the nineteenth century the population of Scotland had become increasingly urban and industrialised. However, the appetite amongst readers, first whetted by Walter Scott, for novels about heroic exploits in a mythical untamed Scottish landscape, encouraged yet more novels that did not reflect the realities of life in that period.
A Scottish intellectual tradition, going back at least to the philosopher David Hume, can be seen reflected in the Sherlock Holmes books of Sir Arthur Conan Doyle: although Holmes is now seen as part of quintessential London, the spirit of deduction in these books is arguably more Scottish than English.
Robert Louis Stevenson's most famous works are still popular and feature in many plays and films. The short novel Strange Case of Dr Jekyll and Mr Hyde (1886) depicts the dual personality of a kind and intelligent physician who turns into a psychopathic monster after imbibing a drug intended to separate good from evil in a personality. Kidnapped is a fast-paced historical novel set in the aftermath of the '45 Jacobite Rising, and Treasure Island is the classic pirate adventure.
The introduction of the movement known as the "kailyard tradition" at the end of the 19th century brought elements of fantasy and folklore back into fashion. J. M. Barrie is one example of this mix of modernity and nostalgia. This tradition has been viewed as a major stumbling block for Scottish literature, focusing, as it did, on an idealised, pastoral picture of Scottish culture, becoming increasingly removed from the reality of life in Scotland during that period. This tradition was satirised by the author George Douglas Brown in his novel The House with the Green Shutters. It could be argued that Scottish literature as a whole still suffers from the echoes of this tradition today.
One Scottish author whose work has become popular again is the cleric George MacDonald.
In the early 20th century in Scotland, a renaissance in the use of Lowland Scots occurred, its most vocal figure being Hugh MacDiarmid. Other contemporaries were A.J. Cronin, Eric Linklater, Naomi Mitchison, James Bridie, Robert Garioch, Robert McLellan, Nan Shepherd, William Soutar, Douglas Young, and Sidney Goodsir Smith. However, the revival was largely limited to verse and other literature.
Sorley MacLean's work in Scottish Gaelic in the 1930s gave new value to modern literature in that language. Edwin Muir advocated, by contrast, concentration on English as a literary language.
The novelists Neil M. Gunn and Lewis Grassic Gibbon emphasised the real linguistic conflict occurring in Scottish life during this period in their novels, in particular The Silver Darlings and A Scots Quair respectively, where we can see the language of the protagonists grow progressively more anglicised as they move to a more industrial lifestyle.
New writers of the postwar years displayed a new outwardness. Both Alexander Trocchi in the 1950s and Kenneth White in the 1960s left Scotland to live and work in France. Edwin Morgan became known for translations of works from a wide range of European languages.
Edwin Morgan is the current Scots Makar (the officially-appointed national poet, equivalent to a Scottish poet laureate) and also produces translations of world literature. His poetry covers the current and the controversial, ranging over political issues and academic debates.
The tradition of fantastical fiction is continued by Alasdair Gray, whose Lanark has become a cult classic since its publication in 1981. The 1980s also brought attention to writers capturing the urban experience and speech patterns - notably James Kelman and Jeff Torrington.
The works of Irvine Welsh, most famously Trainspotting, are written in a distinctly Scottish English and reflect the underbelly of contemporary Scottish culture. Other commercial writers, Iain Banks and Ian Rankin, have also achieved international recognition for their work, and, like Welsh, have had their work adapted for film or television.
Alexander McCall Smith, Alan Warner, and Glasgow-based novelist Suhayl Saadi, whose short story "Extra Time" is in Glaswegian Scots, have made significant literary contributions in the 21st century.
Scottish Gaelic literature is currently experiencing a revival in print, with the publishing of An Leabhar Mòr and the Ùr Sgeul series, which encouraged new authors of poetry and fiction.
The Scottish literature canon has in recent years opened up to the idea of including women authors, encouraging a revisiting of Scottish women's work from past and present.
In recent years the publishing house Canongate Books has become increasingly successful, publishing Scottish literature from all eras, and encouraging new literature.
Major depressive disorder
[Image: Vincent van Gogh's 1890 painting Sorrowing old man ('At Eternity's Gate')]
Major depressive disorder (MDD) (also known as clinical depression, major depression, unipolar depression, unipolar disorder or recurrent depression in the case of repeated episodes) is a mental disorder characterized by episodes of all-encompassing low mood accompanied by low self-esteem and loss of interest or pleasure in normally enjoyable activities. This cluster of symptoms (syndrome) was named, described and classified as one of the mood disorders in the 1980 edition of the American Psychiatric Association's diagnostic manual. The term "depression" is ambiguous. It is often used to denote this syndrome but may refer to other mood disorders or to lower mood states lacking clinical significance. Major depressive disorder is a disabling condition that adversely affects a person's family, work or school life, sleeping and eating habits, and general health. In the United States, around 3.4% of people with major depression commit suicide, and up to 60% of people who commit suicide had depression or another mood disorder.
The diagnosis of major depressive disorder is based on the patient's self-reported experiences, behavior reported by relatives or friends, and a mental status examination. There is no laboratory test for major depression, although physicians generally request tests for physical conditions that may cause similar symptoms. The most common time of onset is between the ages of 20 and 30 years, with a later peak between 30 and 40 years.
Typically, patients are treated with antidepressant medication and, in many cases, also receive psychotherapy or counseling, although the effectiveness of medication for mild or moderate cases is questionable. Hospitalization may be necessary in cases with associated self-neglect or a significant risk of harm to self or others. A minority are treated with electroconvulsive therapy (ECT). The course of the disorder varies widely, from one episode lasting weeks to a lifelong disorder with recurrent major depressive episodes. Depressed individuals have shorter life expectancies than those without depression, in part because of greater susceptibility to medical illnesses and suicide. It is unclear whether or not medications affect the risk of suicide. Current and former patients may be stigmatized.
The understanding of the nature and causes of depression has evolved over the centuries, though this understanding is incomplete and has left many aspects of depression as the subject of discussion and research. Proposed causes include psychological, psycho-social, hereditary, evolutionary and biological factors. Long-term use and misuse of certain drugs/substances are known to cause and worsen depressive symptoms. Psychological treatments are based on theories of personality, interpersonal communication, and learning. Most biological theories focus on the monoamine chemicals serotonin, norepinephrine and dopamine, which are naturally present in the brain and assist communication between nerve cells.
Symptoms and signs
Major depression significantly affects a person's family and personal relationships, work or school life, sleeping and eating habits, and general health. Its impact on functioning and well-being has been compared to that of chronic medical conditions such as diabetes.
A person having a major depressive episode usually exhibits a very low mood, which pervades all aspects of life, and an inability to experience pleasure in activities that were formerly enjoyed. Depressed people may be preoccupied with, or ruminate over, thoughts and feelings of worthlessness, inappropriate guilt or regret, helplessness, hopelessness, and self-hatred. In severe cases, depressed people may have symptoms of psychosis. These symptoms include delusions or, less commonly, hallucinations, usually unpleasant. Other symptoms of depression include poor concentration and memory (especially in those with melancholic or psychotic features), withdrawal from social situations and activities, reduced sex drive, and thoughts of death or suicide. Insomnia is common among the depressed. In the typical pattern, a person wakes very early and cannot get back to sleep. Insomnia affects at least 80% of depressed people. Hypersomnia, or oversleeping, can also happen. Some antidepressants may also cause insomnia due to their stimulating effect.
A depressed person may report multiple physical symptoms such as fatigue, headaches, or digestive problems; physical complaints are the most common presenting problem in developing countries, according to the World Health Organization's criteria for depression. Appetite often decreases, with resulting weight loss, although increased appetite and weight gain occasionally occur. Family and friends may notice that the person's behavior is either agitated or lethargic. Older depressed people may have cognitive symptoms of recent onset, such as forgetfulness, and a more noticeable slowing of movements. Depression often coexists with physical disorders common among the elderly, such as stroke, other cardiovascular diseases, Parkinson's disease, and chronic obstructive pulmonary disease.
Depressed children may often display an irritable mood rather than a depressed mood, and show varying symptoms depending on age and situation. Most lose interest in school and show a decline in academic performance. They may be described as clingy, demanding, dependent, or insecure. Diagnosis may be delayed or missed when symptoms are interpreted as normal moodiness. Depression may also coexist with attention-deficit hyperactivity disorder (ADHD), complicating the diagnosis and treatment of both.
Major depression frequently co-occurs with other psychiatric problems. The 1990–92 National Comorbidity Survey (US) reports that 51% of those with major depression also suffer from lifetime anxiety. Anxiety symptoms can have a major impact on the course of a depressive illness, with delayed recovery, increased risk of relapse, greater disability and increased suicide attempts. American neuroendocrinologist Robert Sapolsky similarly argues that the relationship between stress, anxiety, and depression could be measured and demonstrated biologically. There are increased rates of alcohol and drug abuse and particularly dependence, and around a third of individuals diagnosed with ADHD develop comorbid depression. Post-traumatic stress disorder and depression often co-occur.
Depression and pain often co-occur. One or more pain symptoms are present in 65% of depressed patients, and anywhere from 5 to 85% of patients with pain will be suffering from depression, depending on the setting; there is a lower prevalence in general practice, and higher in specialty clinics. The diagnosis of depression is often delayed or missed, and the outcome worsens. The outcome can also worsen if the depression is noticed but completely misunderstood.
Depression is also associated with a 1.5- to 2-fold increased risk of cardiovascular disease, independent of other known risk factors, and is itself linked directly or indirectly to risk factors such as smoking and obesity. People with major depression are less likely to follow medical recommendations for treating cardiovascular disorders, which further increases their risk. In addition, cardiologists may not recognize underlying depression that complicates a cardiovascular problem under their care.
The biopsychosocial model proposes that biological, psychological, and social factors all play a role in causing depression. The diathesis–stress model specifies that depression results when a preexisting vulnerability, or diathesis, is activated by stressful life events. The preexisting vulnerability can be either genetic, implying an interaction between nature and nurture, or schematic, resulting from views of the world learned in childhood.
These interactive models have gained empirical support. For example, researchers in New Zealand took a prospective approach to studying depression, by documenting over time how depression emerged among an initially normal cohort of people. The researchers concluded that variation among the serotonin transporter (5-HTT) gene affects the chances that people who have dealt with very stressful life events will go on to experience depression. To be specific, depression may follow such events, but seems more likely to appear in people with one or two short alleles of the 5-HTT gene. In addition, a Swedish study estimated the heritability of depression—the degree to which individual differences in occurrence are associated with genetic differences—to be around 40% for women and 30% for men, and evolutionary psychologists have proposed that the genetic basis for depression lies deep in the history of naturally selected adaptations. A substance-induced mood disorder resembling major depression has been causally linked to long-term drug use or drug abuse, or to withdrawal from certain sedative and hypnotic drugs.
Most antidepressant medications increase the levels of one or more of the monoamines — the neurotransmitters serotonin, norepinephrine and dopamine — in the synaptic cleft between neurons in the brain. Some medications affect the monoamine receptors directly.
Serotonin is hypothesized to regulate other neurotransmitter systems; decreased serotonin activity may allow these systems to act in unusual and erratic ways. According to this "permissive hypothesis", depression arises when low serotonin levels promote low levels of norepinephrine, another monoamine neurotransmitter. Some antidepressants enhance the levels of norepinephrine directly, whereas others raise the levels of dopamine, a third monoamine neurotransmitter. These observations gave rise to the monoamine hypothesis of depression. In its contemporary formulation, the monoamine hypothesis postulates that a deficiency of certain neurotransmitters is responsible for the corresponding features of depression: "Norepinephrine may be related to alertness and energy as well as anxiety, attention, and interest in life; [lack of] serotonin to anxiety, obsessions, and compulsions; and dopamine to attention, motivation, pleasure, and reward, as well as interest in life." The proponents of this theory recommend the choice of an antidepressant with mechanism of action that impacts the most prominent symptoms. Anxious and irritable patients should be treated with SSRIs or norepinephrine reuptake inhibitors, and those experiencing a loss of energy and enjoyment of life with norepinephrine- and dopamine-enhancing drugs.
Besides the clinical observations that drugs that increase the amount of available monoamines are effective antidepressants, recent advances in psychiatric genetics indicate that phenotypic variation in central monoamine function may be marginally associated with vulnerability to depression. Despite these findings, the cause of depression is not simply monoamine deficiency. In the past two decades, research has revealed multiple limitations of the monoamine hypothesis, and its explanatory inadequacy has been highlighted within the psychiatric community. A counterargument is that the mood-enhancing effect of MAO inhibitors and SSRIs takes weeks of treatment to develop, even though the boost in available monoamines occurs within hours. Another counterargument is based on experiments with pharmacological agents that cause depletion of monoamines; while deliberate reduction in the concentration of centrally available monoamines may slightly lower the mood of unmedicated depressed patients, this reduction does not affect the mood of healthy people. The monoamine hypothesis, already limited, has been further oversimplified when presented to the general public as a mass marketing tool, usually phrased as a "chemical imbalance".
In 2003 a gene-environment interaction (GxE) was hypothesized to explain why life stress is a predictor for depressive episodes in some individuals, but not in others, depending on an allelic variation of the serotonin-transporter-linked promoter region (5-HTTLPR); a 2009 meta-analysis showed stressful life events were associated with depression, but found no evidence for an association with the 5-HTTLPR genotype. Another 2009 meta-analysis agreed with the latter finding. A 2010 review of studies in this area found a systematic relationship between the method used to assess environmental adversity and the results of the studies; this review also found that both 2009 meta-analyses were significantly biased toward negative studies, which used self-report measures of adversity.
MRI scans of patients with depression have revealed a number of differences in brain structure compared to those who are not depressed. Recent meta-analyses of neuroimaging studies in major depression reported that, compared to controls, depressed patients had increased volume of the lateral ventricles and adrenal gland and smaller volumes of the basal ganglia, thalamus, hippocampus, and frontal lobe (including the orbitofrontal cortex and gyrus rectus). Hyperintensities have been associated with patients with a late age of onset, and have led to the development of the theory of vascular depression.
There may be a link between depression and neurogenesis of the hippocampus, a center for both mood and memory. Loss of hippocampal neurons is found in some depressed individuals and correlates with impaired memory and dysthymic mood. Drugs may increase serotonin levels in the brain, stimulating neurogenesis and thus increasing the total mass of the hippocampus. This increase may help to restore mood and memory. Similar relationships have been observed between depression and an area of the anterior cingulate cortex implicated in the modulation of emotional behavior. One of the neurotrophins responsible for neurogenesis is brain-derived neurotrophic factor (BDNF). The level of BDNF in the blood plasma of depressed subjects is drastically reduced (more than threefold) as compared to the norm. Antidepressant treatment increases the blood level of BDNF. Although decreased plasma BDNF levels have been found in many other disorders, there is some evidence that BDNF is involved in the cause of depression and the mechanism of action of antidepressants.
There is some evidence that major depression may be caused in part by an overactive hypothalamic-pituitary-adrenal axis (HPA axis) that results in an effect similar to the neuro-endocrine response to stress. Investigations reveal increased levels of the hormone cortisol and enlarged pituitary and adrenal glands, suggesting disturbances of the endocrine system may play a role in some psychiatric disorders, including major depression. Oversecretion of corticotropin-releasing hormone from the hypothalamus is thought to drive this, and is implicated in the cognitive and arousal symptoms.
The hormone estrogen has been implicated in depressive disorders due to the increase in risk of depressive episodes after puberty, the antenatal period, and reduced rates after menopause. Conversely, the premenstrual and postpartum periods of low estrogen levels are also associated with increased risk. Sudden withdrawal of, fluctuations in, or periods of sustained low levels of estrogen have been linked to significant mood lowering. Clinical recovery from depression postpartum, perimenopause, and postmenopause was shown to be effective after levels of estrogen were stabilized or restored.
Other research has explored potential roles of molecules necessary for overall cellular functioning: cytokines. The symptoms of major depressive disorder are nearly identical to those of sickness behavior, the response of the body when the immune system is fighting an infection. This raises the possibility that depression can result from a maladaptive manifestation of sickness behavior as a result of abnormalities in circulating cytokines. The involvement of pro-inflammatory cytokines in depression is strongly suggested by a meta-analysis of the clinical literature showing higher blood concentrations of IL-6 and TNF-α in depressed subjects compared to controls. These immunological abnormalities may cause excessive prostaglandin E₂ production and likely excessive COX-2 expression. Abnormal activation of the enzyme indoleamine 2,3-dioxygenase may shunt tryptophan metabolism down the kynurenine pathway, increasing production of the neurotoxin quinolinic acid and contributing to major depression. NMDA activation leading to excessive glutamatergic neurotransmission may also contribute.
Finally, some relationships have been reported between specific subtypes of depression and climatic conditions. Thus, the incidence of psychotic depression has been found to increase when the barometric pressure is low, while the incidence of melancholic depression has been found to increase when the temperature and/or sunlight are low.
Inflammatory processes can be triggered by negative cognitions or their consequences, such as stress, violence, or deprivation. Thus, negative cognitions can cause inflammation that can, in turn, lead to depression.
Various aspects of personality and its development appear to be integral to the occurrence and persistence of depression, with negative emotionality as a common precursor. Although depressive episodes are strongly correlated with adverse events, a person's characteristic style of coping may be correlated with his or her resilience. In addition, low self-esteem and self-defeating or distorted thinking are related to depression. Depression is less likely to occur, as well as quicker to remit, among those who are religious. It is not always clear which factors are causes and which are effects of depression; however, depressed persons who are able to reflect upon and challenge their thinking patterns often show improved mood and self-esteem.
American psychiatrist Aaron T. Beck, following on from the earlier work of George Kelly and Albert Ellis, developed what is now known as a cognitive model of depression in the early 1960s. He proposed that three concepts underlie depression: a triad of negative thoughts composed of cognitive errors about oneself, one's world, and one's future; recurrent patterns of depressive thinking, or schemas; and distorted information processing. From these principles, he developed the structured technique of cognitive behavioral therapy (CBT). According to American psychologist Martin Seligman, depression in humans is similar to learned helplessness in laboratory animals, who remain in unpleasant situations when they are able to escape, but do not because they initially learned they had no control.
Attachment theory, which was developed by English psychiatrist John Bowlby in the 1960s, predicts a relationship between depressive disorder in adulthood and the quality of the earlier bond between the infant and the adult caregiver. In particular, it is thought that "the experiences of early loss, separation and rejection by the parent or caregiver (conveying the message that the child is unlovable) may all lead to insecure internal working models ... Internal cognitive representations of the self as unlovable and of attachment figures as unloving [or] untrustworthy would be consistent with parts of Beck's cognitive triad". While a wide variety of studies has upheld the basic tenets of attachment theory, research has been inconclusive as to whether self-reported early attachment and later depression are demonstrably related.
Depressed individuals often blame themselves for negative events, and, as shown in a 1993 study of hospitalized adolescents with self-reported depression, those who blame themselves for negative occurrences may not take credit for positive outcomes. This tendency is characteristic of a depressive attributional, or pessimistic explanatory style. According to Albert Bandura, a Canadian social psychologist associated with social cognitive theory, depressed individuals have negative beliefs about themselves, based on experiences of failure, observing the failure of social models, a lack of social persuasion that they can succeed, and their own somatic and emotional states including tension and stress. These influences may result in a negative self-concept and a lack of self-efficacy; that is, they do not believe they can influence events or achieve personal goals.
An examination of depression in women indicates that vulnerability factors—such as early maternal loss, lack of a confiding relationship, responsibility for the care of several young children at home, and unemployment—can interact with life stressors to increase the risk of depression. For older adults, the factors are often health problems, changes in relationships with a spouse or adult children due to the transition to a care-giving or care-needing role, the death of a significant other, or a change in the availability or quality of social relationships with older friends because of their own health-related life changes.
The understanding of depression has also received contributions from the psychoanalytic and humanistic branches of psychology. From the classical psychoanalytic perspective of Austrian psychiatrist Sigmund Freud, depression, or melancholia, may be related to interpersonal loss and early life experiences. Existential therapists have connected depression to the lack of both meaning in the present and a vision of the future. The founder of humanistic psychology, American psychologist Abraham Maslow, suggested that depression could arise when people are unable to attain their needs or to self-actualize (to realize their full potential).
Poverty and social isolation are associated with increased risk of mental health problems in general. Child abuse (physical, emotional, sexual, or neglect) is also associated with increased risk of developing depressive disorders later in life. Such a link has good face validity given that it is during the years of development that a child is learning how to become a social being. Abuse of the child by the caregiver is bound to distort the developing personality and create a much greater risk for depression and many other debilitating mental and emotional states. Disturbances in family functioning, such as parental (particularly maternal) depression, severe marital conflict or divorce, death of a parent, or other disturbances in parenting are additional risk factors. In adulthood, stressful life events are strongly associated with the onset of major depressive episodes. In this context, life events connected to social rejection appear to be particularly related to depression. Evidence that a first episode of depression is more likely to be immediately preceded by stressful life events than are recurrent ones is consistent with the hypothesis that people may become increasingly sensitized to life stress over successive recurrences of depression.
The relationship between stressful life events and social support has been a matter of some debate; the lack of social support may increase the likelihood that life stress will lead to depression, or the absence of social support may constitute a form of strain that leads to depression directly. There is evidence that neighborhood social disorder, for example, due to crime or illicit drugs, is a risk factor, and that a high neighborhood socioeconomic status, with better amenities, is a protective factor. Adverse conditions at work, particularly demanding jobs with little scope for decision-making, are associated with depression, although diversity and confounding factors make it difficult to confirm that the relationship is causal.
Depression can be caused by prejudice. This can occur when people hold negative self-stereotypes about themselves. This "deprejudice" can be related to a group membership (e.g., Me-Gay-Bad) or not (Me-Bad). If someone has prejudicial beliefs about a stigmatized group and then becomes a member of that group, they may internalize their prejudice and develop depression. For example, a boy growing up in the United States may learn the negative stereotype that gay men are immoral. When he grows up and realizes he is gay, he may direct this prejudice inward on himself and become depressed. People may also show prejudice internalization through self-stereotyping because of negative childhood experiences such as verbal and physical abuse.
From the standpoint of evolutionary theory, major depression is hypothesized, in some instances, to increase an individual's reproductive fitness. Evolutionary approaches to depression and evolutionary psychology posit specific mechanisms by which depression may have been genetically incorporated into the human gene pool, accounting for the high heritability and prevalence of depression by proposing that certain components of depression are adaptations, such as the behaviors relating to attachment and social rank. Current behaviors can be explained as adaptations to regulate relationships or resources, although the result may be maladaptive in modern environments.
From another viewpoint, a counseling therapist may see depression not as a biochemical illness or disorder but as "a species-wide evolved suite of emotional programs that are mostly activated by a perception, almost always over-negative, of a major decline in personal usefulness, that can sometimes be linked to guilt, shame or perceived rejection". This suite may have manifested in aging hunters in humans' foraging past, who were marginalized by their declining skills, and may continue to appear in alienated members of today's society. The feelings of uselessness generated by such marginalization could in theory prompt support from friends and kin. In addition, in a manner analogous to that in which physical pain has evolved to hinder actions that may cause further injury, "psychic misery" may have evolved to prevent hasty and maladaptive reactions to distressing situations.
Drug and alcohol use
Very high levels of substance abuse occur in the psychiatric population, especially alcohol, sedatives and cannabis. Depression and other mental health problems can have a substance induced cause; making a differential or dual diagnosis regarding whether mental ill-health is substance related or not or co-occurring is an important part of a psychiatric evaluation. According to the DSM-IV, a diagnosis of mood disorder cannot be made if the cause is believed to be due to "the direct physiological effects of a substance"; when a syndrome resembling major depression is believed to be caused immediately by substance abuse or by an adverse drug reaction, it is referred to as, "substance-induced mood disturbance". Alcoholism or excessive alcohol consumption significantly increases the risk of developing major depression. Like alcohol, the benzodiazepines are central nervous system depressants; this class of medication is commonly used to treat insomnia, anxiety, and muscular spasms. Similar to alcohol, benzodiazepines increase the risk of developing major depression. This increased risk of depression may be due in part to the adverse or toxic effects of sedative-hypnotic drugs including alcohol on neurochemistry, such as decreased levels of serotonin and norepinephrine, or activation of immune mediated inflammatory pathways in the brain. Chronic use of benzodiazepines also can cause or worsen depression, or depression may be part of a protracted withdrawal syndrome. About a quarter of people recovering from alcoholism experience anxiety and depression which can persist for up to 2 years. Methamphetamine abuse is also commonly associated with depression.
A diagnostic assessment may be conducted by a suitably trained general practitioner, or by a psychiatrist or psychologist, who records the person's current circumstances, biographical history, current symptoms and family history. The broad clinical aim is to formulate the relevant biological, psychological and social factors that may be impacting on the individual's mood. The assessor may also discuss the person's current ways of regulating their mood (healthy or otherwise) such as alcohol and drug use. The assessment also includes a mental state examination, which is an assessment of the person's current mood and thought content, in particular the presence of themes of hopelessness or pessimism, self-harm or suicide, and an absence of positive thoughts or plans. Specialist mental health services are rare in rural areas, and thus diagnosis and management is left largely to primary-care clinicians. This issue is even more marked in developing countries. The score on a rating scale alone is insufficient to diagnose depression to the satisfaction of the DSM or ICD, but it provides an indication of the severity of symptoms for a time period, so a person who scores above a given cut-off point can be more thoroughly evaluated for a depressive disorder diagnosis. Several rating scales are used for this purpose. Screening programs have been advocated to improve detection of depression, but there is evidence that they do not improve detection rates, treatment, or outcome.
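The cut-off logic described above can be sketched in a few lines. This is an illustrative sketch only: the 0–3 item scoring and the cut-off of 10 are assumptions modeled on common self-report instruments such as the PHQ-9, and crossing the cut-off only flags a person for fuller diagnostic evaluation, never a diagnosis by itself.

```python
# Illustrative screening triage. A rating-scale score alone is
# insufficient to diagnose depression; it indicates symptom severity
# for a period and flags who should be evaluated more thoroughly.
def needs_further_evaluation(item_scores, cutoff=10):
    """Sum 0-3 item scores (PHQ-9 style, nine items, total 0-27)
    and compare the total to an assumed screening cut-off."""
    total = sum(item_scores)
    return total, total >= cutoff

# Hypothetical responses from one screening questionnaire:
total, flagged = needs_further_evaluation([2, 2, 1, 1, 2, 1, 0, 1, 0])
# total == 10, flagged is True -> refer for a full diagnostic assessment
```

Note that the evidence cited above suggests such screening programs do not by themselves improve detection rates, treatment, or outcome; the sketch shows only the mechanics of a cut-off.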
Primary care physicians and other non-psychiatrist physicians have difficulty diagnosing depression, in part because they are trained to recognize and treat physical symptoms, and depression can cause a myriad of physical (psychosomatic) symptoms. Non-psychiatrists miss two-thirds of cases and unnecessarily treat other patients.
Before diagnosing a major depressive disorder, in general a doctor performs a medical examination and selected investigations to rule out other causes of symptoms. These include blood tests measuring TSH and thyroxine to exclude hypothyroidism; basic electrolytes and serum calcium to rule out a metabolic disturbance; and a full blood count including ESR to rule out a systemic infection or chronic disease. Adverse affective reactions to medications or alcohol misuse are often ruled out, as well. Testosterone levels may be evaluated to diagnose hypogonadism, a cause of depression in men.
Subjective cognitive complaints appear in older depressed people, but they can also be indicative of the onset of a dementing disorder, such as Alzheimer's disease. Cognitive testing and brain imaging can help distinguish depression from dementia. A CT scan can exclude brain pathology in those with psychotic, rapid-onset or otherwise unusual symptoms. No biological tests confirm major depression. In general, investigations are not repeated for a subsequent episode unless there is a medical indication.
Biomarkers of depression have been sought to provide an objective method of diagnosis. There are several potential biomarkers, including brain-derived neurotrophic factor and various functional MRI techniques. One study developed a decision tree model of interpreting a series of fMRI scans taken during various activities. In their subjects, the authors of that study were able to achieve a sensitivity of 80% and a specificity of 87%, corresponding to a negative predictive value of 98% and a positive predictive value of 32% (positive and negative likelihood ratios were 6.15 and 0.23, respectively). However, much more research is needed before these tests could be used clinically.
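The reported performance figures are internally consistent, which can be checked directly: the likelihood ratios follow from sensitivity and specificity alone, while predictive values also depend on prevalence. The prevalence used below (about 7%) is an assumption chosen to reproduce the reported predictive values, not a figure given in the source.

```python
# Standard diagnostic-test arithmetic, applied to the fMRI study's
# reported sensitivity (0.80) and specificity (0.87).
def lr_pos(sens, spec):
    return sens / (1 - spec)            # positive likelihood ratio

def lr_neg(sens, spec):
    return (1 - sens) / spec            # negative likelihood ratio

def ppv(sens, spec, prev):
    # positive predictive value at a given prevalence
    return sens * prev / (sens * prev + (1 - spec) * (1 - prev))

def npv(sens, spec, prev):
    # negative predictive value at a given prevalence
    return spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

sens, spec = 0.80, 0.87
prev = 0.07  # assumed prevalence, chosen to match the reported 98%/32%
print(round(lr_pos(sens, spec), 2))   # -> 6.15
print(round(lr_neg(sens, spec), 2))   # -> 0.23
print(round(ppv(sens, spec, prev), 2))  # -> 0.32
print(round(npv(sens, spec, prev), 2))  # -> 0.98
```

The low positive predictive value at realistic prevalence illustrates why, as the text notes, much more research is needed before such tests could be used clinically.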
DSM-IV-TR and ICD-10 criteria
The most widely used criteria for diagnosing depressive conditions are found in the American Psychiatric Association's revised fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR), and the World Health Organization's International Statistical Classification of Diseases and Related Health Problems (ICD-10), which uses the name depressive episode for a single episode and recurrent depressive disorder for repeated episodes. The latter system is typically used in European countries, while the former is used in the US and many other non-European nations, and the authors of both have worked towards conforming one with the other.
Both DSM-IV-TR and ICD-10 mark out typical (main) depressive symptoms. ICD-10 defines three typical depressive symptoms (depressed mood, anhedonia, and reduced energy), two of which should be present to determine depressive disorder diagnosis. According to DSM-IV-TR, there are two main depressive symptoms—depressed mood and anhedonia. At least one of these must be present to make a diagnosis of major depressive episode.
Major depressive disorder is classified as a mood disorder in DSM-IV-TR. The diagnosis hinges on the presence of single or recurrent major depressive episodes. Further qualifiers are used to classify both the episode itself and the course of the disorder. The category Depressive Disorder Not Otherwise Specified is diagnosed if the depressive episode's manifestation does not meet the criteria for a major depressive episode. The ICD-10 system does not use the term major depressive disorder, but lists very similar criteria for the diagnosis of a depressive episode (mild, moderate or severe); the term recurrent may be added if there have been multiple episodes without mania.
Major depressive episode
A major depressive episode is characterized by the presence of a severely depressed mood that persists for at least two weeks. Episodes may be isolated or recurrent and are categorized as mild (few symptoms in excess of minimum criteria), moderate, or severe (marked impact on social or occupational functioning). An episode with psychotic features — commonly referred to as psychotic depression — is automatically rated as severe. If the patient has had an episode of mania or markedly elevated mood, a diagnosis of bipolar disorder is made instead. Depression without mania is sometimes referred to as unipolar because the mood remains at one emotional state or "pole".
DSM-IV-TR excludes cases where the symptoms are a result of bereavement, although it is possible for normal bereavement to evolve into a depressive episode if the mood persists and the characteristic features of a major depressive episode develop. The criteria have been criticized because they do not take into account any other aspects of the personal and social context in which depression can occur. In addition, some studies have found little empirical support for the DSM-IV cut-off criteria, indicating they are a diagnostic convention imposed on a continuum of depressive symptoms of varying severity and duration. Excluded are a range of related diagnoses, including dysthymia, which involves a chronic but milder mood disturbance; recurrent brief depression, consisting of briefer depressive episodes; minor depressive disorder, whereby only some of the symptoms of major depression are present; and adjustment disorder with depressed mood, which denotes low mood resulting from a psychological response to an identifiable event or stressor.
The DSM-IV-TR recognizes five further subtypes of MDD, called specifiers, in addition to noting the length, severity and presence of psychotic features:
- Melancholic depression is characterized by a loss of pleasure in most or all activities, a failure of reactivity to pleasurable stimuli, a quality of depressed mood more pronounced than that of grief or loss, a worsening of symptoms in the morning hours, early-morning waking, psychomotor retardation, excessive weight loss (not to be confused with anorexia nervosa), or excessive guilt.
- Atypical depression is characterized by mood reactivity (paradoxical anhedonia) and positivity, significant weight gain or increased appetite (comfort eating), excessive sleep or sleepiness (hypersomnia), a sensation of heaviness in limbs known as leaden paralysis, and significant social impairment as a consequence of hypersensitivity to perceived interpersonal rejection.
- Catatonic depression is a rare and severe form of major depression involving disturbances of motor behavior and other symptoms. Here the person is mute and almost stuporous, and either remains immobile or exhibits purposeless or even bizarre movements. Catatonic symptoms also occur in schizophrenia or in manic episodes, or may be caused by neuroleptic malignant syndrome.
- Postpartum depression, or mental and behavioral disorders associated with the puerperium, not elsewhere classified, refers to the intense, sustained and sometimes disabling depression experienced by women after giving birth. Postpartum depression has an incidence rate of 10–15% among new mothers. The DSM-IV mandates that, in order to qualify as postpartum depression, onset occur within one month of delivery. It has been said that postpartum depression can last as long as three months.
- Seasonal affective disorder (SAD) is a form of depression in which depressive episodes come on in the autumn or winter, and resolve in spring. The diagnosis is made if at least two episodes have occurred in colder months with none at other times, over a two-year period or longer.
To confirm major depressive disorder as the most likely diagnosis, other potential diagnoses must be considered, including dysthymia, adjustment disorder with depressed mood or bipolar disorder. Dysthymia is a chronic, milder mood disturbance in which a person reports a low mood almost daily over a span of at least two years. The symptoms are not as severe as those for major depression, although people with dysthymia are vulnerable to secondary episodes of major depression (sometimes referred to as double depression). Adjustment disorder with depressed mood is a mood disturbance appearing as a psychological response to an identifiable event or stressor, in which the resulting emotional or behavioral symptoms are significant but do not meet the criteria for a major depressive episode. Bipolar disorder, also known as manic–depressive disorder, is a condition in which depressive phases alternate with periods of mania or hypomania. Although depression is currently categorized as a separate disorder, there is ongoing debate because individuals diagnosed with major depression often experience some hypomanic symptoms, indicating a mood disorder continuum.
Other disorders need to be ruled out before diagnosing major depressive disorder. They include depressions due to physical illness, medications, and substance abuse. Depression due to physical illness is diagnosed as a mood disorder due to a general medical condition; this is determined on the basis of history, laboratory findings, or physical examination. When the depression is caused by a substance, whether a drug of abuse, a medication, or exposure to a toxin, it is diagnosed as a substance-induced mood disorder. In such cases, the substance is judged to be etiologically related to the mood disturbance.
Behavioral interventions, such as interpersonal therapy and cognitive-behavioral therapy, are effective at preventing new onset depression. Because such interventions appear to be most effective when delivered to individuals or small groups, it has been suggested that they may be able to reach their large target audience most efficiently through the Internet. However, an earlier meta-analysis found preventive programs with a competence-enhancing component to be superior to behaviorally oriented programs overall, and found behavioral programs to be particularly unhelpful for older people, for whom social support programs were uniquely beneficial. In addition, the programs that best prevented depression comprised more than eight sessions, each lasting between 60 and 90 minutes, were provided by a combination of lay and professional workers, had a high-quality research design, reported attrition rates, and had a well-defined intervention. The Netherlands mental health care system provides preventive interventions, such as the "Coping with Depression" course (CWD) for people with sub-threshold depression. The course is claimed to be the most successful of psychoeducational interventions for the treatment and prevention of depression (both for its adaptability to various populations and its results), with a risk reduction of 38% in major depression and an efficacy as a treatment comparing favorably to other psychotherapies. Preventive efforts may result in decreases in rates of the condition of between 22 and 38%.
The three most common treatments for depression are psychotherapy, medication, and electroconvulsive therapy. Psychotherapy is the treatment of choice for people under 18, while electroconvulsive therapy is used only as a last resort. Care is usually given on an outpatient basis, whereas treatment in an inpatient unit is considered if there is a significant risk to self or others.
Treatment options are much more limited in developing countries, where access to mental health staff, medication, and psychotherapy is often difficult. Development of mental health services is minimal in many countries; depression is viewed as a phenomenon of the developed world despite evidence to the contrary, and not as an inherently life-threatening condition. Physical exercise is recommended for management of mild depression, but it has only a moderate, statistically insignificant effect on symptoms in most cases of major depressive disorder.
Psychotherapy can be delivered to individuals, groups, or families by mental health professionals, including psychotherapists, psychiatrists, psychologists, clinical social workers, counselors, and suitably trained psychiatric nurses. With more complex and chronic forms of depression, a combination of medication and psychotherapy may be used.
Cognitive behavioral therapy (CBT) currently has the most research evidence for the treatment of depression in children and adolescents, and CBT and interpersonal psychotherapy (IPT) are preferred therapies for adolescent depression. In people under 18, according to the National Institute for Health and Clinical Excellence, medication should be offered only in conjunction with a psychological therapy, such as CBT, interpersonal therapy, or family therapy.
Psychotherapy has been shown to be effective in older people. Successful psychotherapy appears to reduce the recurrence of depression even after it has been terminated or replaced by occasional booster sessions.
The most-studied form of psychotherapy for depression is CBT, which teaches clients to challenge self-defeating, but enduring ways of thinking (cognitions) and change counter-productive behaviors. Research beginning in the mid-1990s suggested that CBT could perform as well or better than antidepressants in patients with moderate to severe depression. CBT may be effective in depressed adolescents, although its effects on severe episodes are not definitively known. Several variables predict success for cognitive behavioral therapy in adolescents: higher levels of rational thoughts, less hopelessness, fewer negative thoughts, and fewer cognitive distortions. CBT is particularly beneficial in preventing relapse. Several variants of cognitive behavior therapy have been used in depressed patients, the most notable being rational emotive behavior therapy, and more recently mindfulness-based cognitive therapy.
Psychoanalysis is a school of thought, founded by Sigmund Freud, which emphasizes the resolution of unconscious mental conflicts. Psychoanalytic techniques are used by some practitioners to treat clients presenting with major depression. A more widely practiced, eclectic technique, called psychodynamic psychotherapy, is loosely based on psychoanalysis and has an additional social and interpersonal focus. In a meta-analysis of three controlled trials of Short Psychodynamic Supportive Psychotherapy, this modification was found to be as effective as medication for mild to moderate depression.
Logotherapy, a form of existential psychotherapy developed by Austrian psychiatrist Viktor Frankl, addresses the filling of an "existential vacuum" associated with feelings of futility and meaninglessness. It is posited that this type of psychotherapy may be useful for depression in older adolescents.
The effectiveness of antidepressants is minimal or absent in those with mild or moderate depression but significant in those with very severe disease. The effects of antidepressants are somewhat superior to those of psychotherapy, especially in cases of chronic major depression, although in short-term trials more patients, especially those with less serious forms of depression, cease medication than cease psychotherapy, most likely due to adverse effects from the medication and to patients' preferences for psychological therapies over pharmacological treatments.
To find the most effective antidepressant medication with minimal side-effects, the dosages can be adjusted, and if necessary, combinations of different classes of antidepressants can be tried. Response rates to the first antidepressant administered range from 50–75%, and it can take at least six to eight weeks from the start of medication to remission, when the patient is back to their normal self. Antidepressant medication treatment is usually continued for 16 to 20 weeks after remission, to minimize the chance of recurrence, and even up to one year of continuation is recommended. People with chronic depression may need to take medication indefinitely to avoid relapse.
Selective serotonin reuptake inhibitors (SSRIs) are the primary medications prescribed owing to their relatively mild side-effects, and because they are less toxic in overdose than other antidepressants. Patients who do not respond to one SSRI can be switched to another antidepressant, and this results in improvement in almost 50% of cases. Another option is to switch to the atypical antidepressant bupropion. Venlafaxine, an antidepressant with a different mechanism of action, may be modestly more effective than SSRIs. However, venlafaxine is not recommended in the UK as a first-line treatment because of evidence suggesting its risks may outweigh benefits, and it is specifically discouraged in children and adolescents. For adolescent depression, fluoxetine and escitalopram are the two recommended choices. Antidepressants have not been found to be beneficial in children. There is also insufficient evidence to determine effectiveness in those with depression complicated by dementia. Any antidepressant can cause low serum sodium levels (also called hyponatremia); nevertheless, it has been reported more often with SSRIs. It is not uncommon for SSRIs to cause or worsen insomnia; the sedating antidepressant mirtazapine can be used in such cases.
Irreversible monoamine oxidase inhibitors, an older class of antidepressants, have been plagued by potentially life-threatening dietary and drug interactions. They are still used only rarely, although newer and better tolerated agents of this class have been developed. The safety profile is different with reversible monoamine oxidase inhibitors such as moclobemide where the risk of serious dietary interactions is negligible and dietary restrictions are less strict.
The terms "refractory depression" and "treatment-resistant depression" are used to describe cases that do not respond to adequate courses of at least two antidepressants. In many major studies, only about 35% of patients respond well to medical treatment. It may be difficult for a doctor to decide when someone has treatment-resistant depression or whether the problem is due to coexisting disorders, which are common among patients with major depression.
A team of psychologists from multiple American universities found that antidepressant drugs have scarcely better effects than a placebo in cases of mild or moderate depression. The study focused on paroxetine and imipramine.
For children, adolescents, and probably young adults between 18 and 24 years old, there is a higher risk of both suicidal ideation and suicidal behavior in those treated with SSRIs. For adults, it is unclear whether or not SSRIs affect the risk of suicidality. One review found no connection; another, an increased risk; and a third, no risk in those 25–65 years old and a decreased risk in those older than 65. Epidemiological data have shown that the widespread use of antidepressants in the new "SSRI-era" is associated with a significant decline in suicide rates in most countries with traditionally high baseline suicide rates, although the causality of the relationship is inconclusive. A black box warning was introduced in the United States in 2007 on SSRIs and other antidepressant medications due to the increased risk of suicide in patients younger than 24 years old. Similar precautionary notice revisions were implemented by the Japanese Ministry of Health.
There is some evidence that fish oil supplements containing a high ratio of eicosapentaenoic acid to docosahexaenoic acid may be effective in major depression, but other meta-analyses of the research conclude that the positive effects may be due to publication bias. There is some preliminary evidence that COX-2 inhibitors have a beneficial effect on major depression.
Electroconvulsive therapy (ECT) is a procedure whereby pulses of electricity are sent through the brain via two electrodes, usually one on each temple, to induce a seizure while the person is under a brief period of general anesthesia. Hospital psychiatrists may recommend ECT for cases of severe major depression that have not responded to antidepressant medication or, less often, psychotherapy or supportive interventions. ECT can have a quicker effect than antidepressant therapy and thus may be the treatment of choice in emergencies such as catatonic depression where the person has stopped eating and drinking, or where a person is severely suicidal. ECT is probably more effective than pharmacotherapy for depression in the immediate short-term, although a landmark community-based study found much lower remission rates in routine practice. When ECT is used on its own, the relapse rate within the first six months is very high; early studies put the rate at around 50%, while a more recent controlled trial found rates of 84% even with placebos. The early relapse rate may be reduced by the use of psychiatric medications or further ECT (although the latter is not recommended by some authorities) but remains high. Common initial adverse effects from ECT include short and long-term memory loss, disorientation and headache. Although memory disturbance after ECT usually resolves within one month, ECT remains a controversial treatment, and debate on its efficacy and safety continues.
Major depressive episodes often resolve over time whether or not they are treated. Outpatients on a waiting list show a 10–15% reduction in symptoms within a few months, with approximately 20% no longer meeting the full criteria for a depressive disorder. The median duration of an episode has been estimated to be 23 weeks, with the highest rate of recovery in the first three months.
Studies have shown that 80% of those suffering from their first major depressive episode will suffer from at least one more during their life, with a lifetime average of four episodes. Other general population studies indicate that around half of those who have an episode (whether treated or not) recover and remain well, while the other half will have at least one more, and around 15% of those experience chronic recurrence. Studies recruiting from selective inpatient sources suggest lower recovery and higher chronicity, while studies of mostly outpatients show that nearly all recover, with a median episode duration of 11 months. Around 90% of those with severe or psychotic depression, most of whom also meet criteria for other mental disorders, experience recurrence.
Recurrence is more likely if symptoms have not fully resolved with treatment. Current guidelines recommend continuing antidepressants for four to six months after remission to prevent relapse. Evidence from many randomized controlled trials indicates continuing antidepressant medications after recovery can reduce the chance of relapse by 70% (41% on placebo vs. 18% on antidepressant). The preventive effect probably lasts for at least the first 36 months of use.
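As a check on the trial figures above: a fall from 41% to 18% is a 56% reduction in the relative risk of relapse, while the quoted figure of roughly 70% corresponds to the reduction in the odds of relapse. A quick sketch of both calculations:

```python
# Relapse rates from the randomized controlled trials cited above.
placebo, drug = 0.41, 0.18

# Relative risk reduction: proportional fall in the probability of relapse.
rrr = (placebo - drug) / placebo                   # ~0.56

# Reduction in the odds of relapse, which matches the quoted ~70%.
def odds(p):
    return p / (1 - p)

odds_reduction = 1 - odds(drug) / odds(placebo)    # ~0.68

print(f"risk reduction {rrr:.0%}, odds reduction {odds_reduction:.0%}")
```

The distinction matters when reading trial summaries: odds-based effect measures look larger than risk-based ones whenever the event is common, as relapse is here.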
Those people who experience repeated episodes of depression require ongoing treatment in order to prevent more severe, long-term depression. In some cases, people need to take medications for long periods of time or for the rest of their lives.
A poor outcome is associated with inappropriate treatment, severe initial symptoms that may include psychosis, early age of onset, a greater number of previous episodes, incomplete recovery after one year, pre-existing severe mental or medical disorders, and family dysfunction.
Depressed individuals have a shorter life expectancy than those without depression, in part because depressed patients are at risk of dying by suicide. However, they also have a higher rate of dying from other causes, being more susceptible to medical conditions such as heart disease. Up to 60% of people who commit suicide have a mood disorder such as major depression, and the risk is especially high if a person has a marked sense of hopelessness or has both depression and borderline personality disorder. The lifetime risk of suicide associated with a diagnosis of major depression in the US is estimated at 3.4%, which averages two highly disparate figures of almost 7% for men and 1% for women (although suicide attempts are more frequent in women). The estimate is substantially lower than a previously accepted figure of 15%, which had been derived from older studies of hospitalized patients.
Depression is often associated with unemployment and poverty. Major depression is currently the leading cause of disease burden in North America and other high-income countries, and the fourth-leading cause worldwide. By 2030, it is predicted to be the second-leading cause of disease burden worldwide, after HIV, according to the World Health Organization. Delay or failure in seeking treatment after relapse, and the failure of health professionals to provide treatment, are two barriers to reducing disability.
Depression is a major cause of morbidity worldwide, estimated to affect approximately 298 million people (4.3% of the global population) as of 2010. Lifetime prevalence varies widely, from 3% in Japan to 17% in the US. In most countries the number of people who have depression during their lives falls within an 8–12% range. In North America, the probability of having a major depressive episode within a year-long period is 3–5% for males and 8–10% for females. Population studies have consistently shown major depression to be about twice as common in women as in men, although it is unclear why this is so, and whether unaccounted-for factors are contributing to this. The relative increase in occurrence is related to pubertal development rather than chronological age, reaches adult ratios between the ages of 15 and 18, and appears associated with psychosocial more than hormonal factors.
People are most likely to suffer their first depressive episode between the ages of 30 and 40, and there is a second, smaller peak of incidence between ages 50 and 60. The risk of major depression is increased with neurological conditions such as stroke, Parkinson's disease, or multiple sclerosis, and during the first year after childbirth. It is also more common after cardiovascular illnesses, and is related more to a poor outcome than to a better one. Studies conflict on the prevalence of depression in the elderly, but most data suggest there is a reduction in this age group. Depressive disorders are more common in urban than in rural populations, and prevalence is higher in groups with adverse socioeconomic circumstances, such as homelessness.
The Ancient Greek physician Hippocrates described a syndrome of melancholia as a distinct disease with particular mental and physical symptoms; he characterized all "fears and despondencies, if they last a long time" as being symptomatic of the ailment. It was a similar but far broader concept than today's depression; prominence was given to a clustering of the symptoms of sadness, dejection, and despondency, and often fear, anger, delusions and obsessions were included.
The term depression itself was derived from the Latin verb deprimere, "to press down". From the 14th century, "to depress" meant to subjugate or to bring down in spirits. It was used in 1665 in English author Richard Baker's Chronicle to refer to someone having "a great depression of spirit", and by English author Samuel Johnson in a similar sense in 1753. The term also came into use in physiology and economics. An early usage referring to a psychiatric symptom was by French psychiatrist Louis Delasiauve in 1856, and by the 1860s it was appearing in medical dictionaries to refer to a physiological and metaphorical lowering of emotional function. Since Aristotle, melancholia had been associated with men of learning and intellectual brilliance, a hazard of contemplation and creativity. The newer concept abandoned these associations and through the 19th century, became more associated with women.
Although melancholia remained the dominant diagnostic term, depression gained increasing currency in medical treatises and was a synonym by the end of the century; German psychiatrist Emil Kraepelin may have been the first to use it as the overarching term, referring to different kinds of melancholia as depressive states.
Sigmund Freud likened the state of melancholia to mourning in his 1917 paper Mourning and Melancholia. He theorized that objective loss, such as the loss of a valued relationship through death or a romantic break-up, results in subjective loss as well; the depressed individual has identified with the object of affection through an unconscious, narcissistic process called the libidinal cathexis of the ego. Such loss results in severe melancholic symptoms more profound than mourning; not only is the outside world viewed negatively but the ego itself is compromised. The patient's decline in self-perception is revealed in his belief in his own blame, inferiority, and unworthiness. Freud also emphasized early life experiences as a predisposing factor. Adolf Meyer put forward a mixed social and biological framework emphasizing reactions in the context of an individual's life, and argued that the term depression should be used instead of melancholia. The first version of the DSM (DSM-I, 1952) contained depressive reaction and the DSM-II (1968) depressive neurosis, defined as an excessive reaction to internal conflict or an identifiable event, and also included a depressive type of manic-depressive psychosis within Major affective disorders.
In the mid-20th century, researchers theorized that depression was caused by a chemical imbalance in neurotransmitters in the brain, a theory based on observations made in the 1950s of the effects of reserpine and isoniazid in altering monoamine neurotransmitter levels and affecting depressive symptoms.
The term Major depressive disorder was introduced by a group of US clinicians in the mid-1970s as part of proposals for diagnostic criteria based on patterns of symptoms (called the "Research Diagnostic Criteria", building on earlier Feighner Criteria), and was incorporated into the DSM-III in 1980. To maintain consistency the ICD-10 used the same criteria, with only minor alterations, but using the DSM diagnostic threshold to mark a mild depressive episode, adding higher threshold categories for moderate and severe episodes. The ancient idea of melancholia still survives in the notion of a melancholic subtype.
The new definitions of depression were widely accepted, albeit with some conflicting findings and views. There have been some continued empirically based arguments for a return to the diagnosis of melancholia. There has been some criticism of the expansion of coverage of the diagnosis, related to the development and promotion of antidepressants and the biological model since the late 1950s.
Society and culture
People's conceptualizations of depression vary widely, both within and among cultures. "Because of the lack of scientific certainty," one commentator has observed, "the debate over depression turns on questions of language. What we call it—'disease,' 'disorder,' 'state of mind'—affects how we view, diagnose, and treat it." There are cultural differences in the extent to which serious depression is considered an illness requiring personal professional treatment, or is an indicator of something else, such as the need to address social or moral problems, the result of biological imbalances, or a reflection of individual differences in the understanding of distress that may reinforce feelings of powerlessness, and emotional struggle.
The diagnosis is less common in some countries, such as China. It has been argued that the Chinese traditionally deny or somatize emotional depression (although since the early 1980s, the Chinese denial of depression may have modified drastically). Alternatively, it may be that Western cultures reframe and elevate some expressions of human distress to disorder status. Australian professor Gordon Parker and others have argued that the Western concept of depression "medicalizes" sadness or misery. Similarly, Hungarian-American psychiatrist Thomas Szasz and others argue that depression is a metaphorical illness that is inappropriately regarded as an actual disease. There has also been concern that the DSM, as well as the field of descriptive psychiatry that employs it, tends to reify abstract phenomena such as depression, which may in fact be social constructs. American archetypal psychologist James Hillman writes that depression can be healthy for the soul, insofar as "it brings refuge, limitation, focus, gravity, weight, and humble powerlessness." Hillman argues that therapeutic attempts to eliminate depression echo the Christian theme of resurrection, but have the unfortunate effect of demonizing a soulful state of being.
Historical figures were often reluctant to discuss or seek treatment for depression due to social stigma about the condition, or due to ignorance of diagnosis or treatments. Nevertheless, analysis or interpretation of letters, journals, artwork, writings or statements of family and friends of some historical personalities has led to the presumption that they may have had some form of depression. People who may have had depression include English author Mary Shelley, American-British writer Henry James, and American president Abraham Lincoln. Some well-known contemporary people with possible depression include Canadian songwriter Leonard Cohen and American playwright and novelist Tennessee Williams. Some pioneering psychologists, such as Americans William James and John B. Watson, dealt with their own depression.
There has been a continuing discussion of whether neurological disorders and mood disorders may be linked to creativity, a discussion that goes back to Aristotelian times. British literature gives many examples of reflections on depression. English philosopher John Stuart Mill experienced a several-months-long period of what he called "a dull state of nerves", when one is "unsusceptible to enjoyment or pleasurable excitement; one of those moods when what is pleasure at other times, becomes insipid or indifferent". He quoted English poet Samuel Taylor Coleridge's "Dejection" as a perfect description of his case: "A grief without a pang, void, dark and drear, / A drowsy, stifled, unimpassioned grief, / Which finds no natural outlet or relief / In word, or sigh, or tear." English writer Samuel Johnson used the term "the black dog" in the 1780s to describe his own depression, and it was subsequently popularized by fellow sufferer former British Prime Minister Sir Winston Churchill.
Social stigma of major depression is widespread, and contact with mental health services reduces this only slightly. Public opinions on treatment differ markedly from those of health professionals; alternative treatments are held to be more helpful than pharmacological ones, which are viewed poorly. In the UK, the Royal College of Psychiatrists and the Royal College of General Practitioners conducted a joint Five-year Defeat Depression campaign to educate and reduce stigma from 1992 to 1996; a MORI study conducted afterwards showed a small positive change in public attitudes to depression and treatment.
- Rihmer Z, Akiskal H. Do antidepressants t(h)reat(en) depressives? Toward a clinically judicious formulation of the antidepressant-suicidality FDA advisory in light of declining national suicide statistics from many countries. J Affect Disord. 2006;94(1–3):3–13. doi:10.1016/j.jad.2006.04.003. PMID 16712945.
- Sakinofsky, I (2007 Jun). "Treating suicidality in depressive illness. Part I: current controversies". Canadian Journal of Psychiatry 52 (6 Suppl 1): 71S–84S. PMID 17824354.
- FDA. FDA Proposes New Warnings About Suicidal Thinking, Behavior in Young Adults Who Take Antidepressant Medications; 2007-05-02 [cited 2008-05-29].
- Medics and Foods Department (in Japanese) (PDF). Pharmaceuticals and Medical Devices Safety Information (Report). 261. Ministry of Health, Labour and Welfare (Japan). http://www1.mhlw.go.jp/kinkyu/iyaku_j/iyaku_j/anzenseijyouhou/261.pdf.
- Sublette ME, Ellis SP, Geant AL, Mann JJ (September 2011). "Meta-analysis of the effects of eicosapentaenoic acid (EPA) in clinical trials in depression". J Clin Psychiatry 72 (12): 1577–84. doi:10.4088/JCP.10m06634. PMID 21939614.
- Bloch MH, Hannestad J (September 2011). "Omega-3 fatty acids for the treatment of depression: systematic review and meta-analysis". Mol Psychiatry 17 (12): 1272–82. doi:10.1038/mp.2011.100. PMID 21931319.
- American Psychiatric Association. Practice guideline for the treatment of patients with major depressive disorder. American Journal of Psychiatry. 2000b;157(Supp 4):1–45. PMID 10767867.
- UK ECT Review Group. Efficacy and safety of electroconvulsive therapy in depressive disorders: a systematic review and meta-analysis. Lancet. 2003;361(9360):799–808. doi:10.1016/S0140-6736(03)12705-5. PMID 12642045.
- Prudic J, Olfson M, Marcus SC, Fuller RB, Sackeim HA. Effectiveness of electroconvulsive therapy in community settings. Biological Psychiatry. 2004;55(3):301–12. doi:10.1016/j.biopsych.2003.09.015. PMID 14744473.
- Bourgon LN, Kellner CH. Relapse of depression after ECT: a review. The journal of ECT. 2000;16(1):19–31. doi:10.1097/00124509-200003000-00003. PMID 10735328.
- Sackeim HA, Haskett RF, Mulsant BH. Continuation pharmacotherapy in the prevention of relapse following electroconvulsive therapy: A randomized controlled trial. JAMA: Journal of the American Medical Association. 2001;285(10):1299–307. doi:10.1001/jama.285.10.1299. PMID 11255384.
- Tew JD, Mulsant BH, Haskett RF, Joan P, Begley AE, Sackeim HA. Relapse during continuation pharmacotherapy after acute response to ECT: A comparison of usual care versus protocolized treatment. Annals of Clinical Psychiatry. 2007;19(1):1–4. doi:10.1080/10401230601163360. PMID 17453654.
- Frederikse M, Petrides G, Kellner C. Continuation and maintenance electroconvulsive therapy for the treatment of depressive illness: a response to the National Institute for Clinical Excellence report. The journal of ECT. 2006;22(1):13–7. doi:10.1097/00124509-200603000-00003. PMID 16633200.
- National Institute for Clinical Excellence. Guidance on the use of electroconvulsive therapy [PDF]. London: National Institute for Health and Clinical Excellence; 2003. ISBN 1-84257-282-2.
- Kellner CH, Knapp RG, Petrides G. Continuation electroconvulsive therapy vs pharmacotherapy for relapse prevention in major depression: A multisite study from the Consortium for Research in Electroconvulsive Therapy (CORE). Archives of General Psychiatry. 2006;63(12):1337–44. doi:10.1001/archpsyc.63.12.1337. PMID 17146008.
- Barlow 2005, p. 239
- Ingram A, Saling MM, Schweitzer I. Cognitive Side Effects of Brief Pulse Electroconvulsive Therapy: A Review. Journal of ECT. 2008;24(1):3–9. doi:10.1097/YCT.0b013e31815ef24a. PMID 18379328.
- Reisner AD. The electroconvulsive therapy controversy: evidence and ethics [PDF]. Neuropsychology review. 2003;13(4):199–219. doi:10.1023/B:NERV.0000009484.76564.58. PMID 15000226.
- Posternak MA, Miller I. Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders. 2001;66(2–3):139–46. doi:10.1016/S0165-0327(00)00304-9. PMID 11578666.
- Posternak MA, Solomon DA, Leon AC. The naturalistic course of unipolar major depression in the absence of somatic therapy. Journal of Nervous and Mental Disease. 2006;194(5):324–29. doi:10.1097/01.nmd.0000217820.33841.53. PMID 16699380.
- Fava GA, Park SK, Sonino N. Treatment of recurrent depression.. Expert Review of Neurotherapeutics. 2006;6(11):1735–1740. doi:10.1586/14737126.96.36.1995. PMID 17144786.
- Limosin F, Mekaoui L, Hautecouverture S. Stratégies thérapeutiques prophylactiques dans la dépression unipolaire [Prophylactic treatment for recurrent major depression]. La Presse Médicale. 2007;36(11-C2):1627–1633. doi:10.1016/j.lpm.2007.03.032. PMID 17555914.
- Eaton WW, Shao H, Nestadt G. Population-based study of first onset and chronicity in major depressive disorder. Archives of General Psychiatry. 2008;65(5):513–20. doi:10.1001/archpsyc.65.5.513. PMID 18458203.
- Holma KM, Holma IA, Melartin TK. Long-term outcome of major depressive disorder in psychiatric patients is variable. Journal of Clinical Psychiatry. 2008;69(2):196–205. doi:10.4088/JCP.v69n0205. PMID 18251627.
- Kanai T, Takeuchi H, Furukawa TA. Time to recurrence after recovery from major depressive episodes and its predictors. Psychological Medicine. 2003;33(5):839–45. doi:10.1017/S0033291703007827. PMID 12877398.
- Geddes JR, Carney SM, Davies C. Relapse prevention with antidepressant drug treatment in depressive disorders: A systematic review. Lancet. 2003;361(9358):653–61. doi:10.1016/S0140-6736(03)12599-8. PMID 12606176.
- "Major Depression". Retrieved 2010-07-16.
- "Prognosis". Retrieved 2010-07-16.
- Cassano P, Fava M. Depression and public health: an overview. J Psychosom Res. 2002;53(4):849–57. doi:10.1016/S0022-3999(02)00304-5. PMID 12377293.
- Rush AJ. The varied clinical presentations of major depressive disorder. The Journal of clinical psychiatry. 2007;68(Supplement 8):4–10. PMID 17640152.
- Alboni P, Favaron E, Paparella N, Sciammarella M, Pedaci M. Is there an association between depression and cardiovascular mortality or sudden death?. Journal of cardiovascular medicine (Hagerstown, Md.). 2008;9(4):356–62. PMID 18334889.
- Blair-West GW, Mellsop GW. Major depression: Does a gender-based down-rating of suicide risk challenge its diagnostic validity?. Australian and New Zealand Journal of Psychiatry. 2001;35(3):322–28. doi:10.1046/j.1440-1614.2001.00895.x. PMID 11437805.
- Oquendo MA, Bongiovi-Garcia ME, Galfalvy H. Sex differences in clinical predictors of suicidal acts after major depression: a prospective study. The American journal of psychiatry. 2007;164(1):134–41. doi:10.1176/appi.ajp.164.1.134. PMID 17202555.
- Bostwick, JM. Affective disorders and suicide risk: A reexamination. American Journal of Psychiatry. 2000;157(12):1925–32. doi:10.1176/appi.ajp.157.12.1925. PMID 11097952.
- Weich S, Lewis G. (fulltext) Poverty, unemployment, and common mental disorders: Population based cohort study. BMJ. 1998 [cited 2008-09-16];317(7151):115–19. PMID 9657786. PMC 28602.
- Mathers CD, Loncar D. Projections of global mortality and burden of disease from 2002 to 2030. PLoS Med.. 2006;3(11):e442. doi:10.1371/journal.pmed.0030442. PMID 17132052.
- Andrews G. (fulltext) In Review: Reducing the Burden of Depression. Canadian Journal of Psychiatry. 2008 [cited 08–11–10];53(7):420–27. PMID 18674396.
- WHO Disease and injury country estimates; 2009 [cited 11 November 2009].
- World Health Organization. The world health report 2001 – Mental Health: New Understanding, New Hope; 2001 [cited 2008-10-19].
- Vos, T (2012 Dec 15). "Years lived with disability (YLDs) for 1160 sequelae of 289 diseases and injuries 1990-2010: a systematic analysis for the Global Burden of Disease Study 2010.". Lancet 380 (9859): 2163–96. PMID 23245607.
- Andrade L, Caraveo-A.. Epidemiology of major depressive episodes: Results from the International Consortium of Psychiatric Epidemiology (ICPE) Surveys . Int J Methods Psychiatr Res. 2003;12(1):3–21. doi:10.1002/mpr.138. PMID 12830306.
- Kessler RC, Berglund P, Demler O, Jin R, Merikangas KR, Walters EE. Lifetime prevalence and age-of-onset distributions of DSM-IV disorders in the National Comorbidity Survey Replication. Archives of General Psychiatry. 2005;62(6):593–602. doi:10.1001/archpsyc.62.6.593. PMID 15939837.
- Murphy JM, Laird NM, Monson RR, Sobol AM, Leighton AH. A 40-year perspective on the prevalence of depression: The Stirling County Study. Archives of General Psychiatry. 2000;57(3):209–15. doi:10.1001/archpsyc.57.3.209. PMID 10711905.
- Gender differences in unipolar depression: An update of epidemiological findings and possible explanations. Acta Psychiatrica Scandinavica. 2003;108(3):163–74. doi:10.1034/j.1600-0447.2003.00204.x. PMID 12890270.
- Eaton WW, Anthony JC, Gallo J. Natural history of diagnostic interview schedule/DSM-IV major depression. The Baltimore Epidemiologic Catchment Area follow-up. Archives of General Psychiatry. 1997;54(11):993–99. PMID 9366655.
- Rickards H. Depression in neurological disorders: Parkinson's disease, multiple sclerosis, and stroke. Journal of Neurology Neurosurgery and Psychiatry. 2005;76:i48–i52. doi:10.1136/jnnp.2004.060426. PMID 15718222. PMC 1765679.
- Strik JJ, Honig A, Maes M. Depression and myocardial infarction: relationship between heart and mind. Progress in neuro-psychopharmacology & biological psychiatry. 2001;25(4):879–92. doi:10.1016/S0278-5846(01)00150-6. PMID 11383983.
- Jorm AF. Does old age reduce the risk of anxiety and depression? A review of epidemiological studies across the adult life span. Psychological Medicine. 2000;30(1):11–22. doi:10.1017/S0033291799001452. PMID 10722172.
- Gelder, M., Mayou, R. and Geddes, J. 2005. Psychiatry. 3rd ed. New York: Oxford. pp105.
- Hippocrates, Aphorisms, Section 6.23
- depress. (n.d.). Online Etymology Dictionary. Retrieved June 30, 2008, from Dictionary.com
- Wolpert, L. Malignant Sadness: The Anatomy of Depression; 1999 [cited 2008-10-30].
- Berrios GE. Melancholia and depression during the 19th century: A conceptual history. British Journal of Psychiatry. 1988;153:298–304. doi:10.1192/bjp.153.3.298. PMID 3074848.
- Historical aspects of mood disorders. Psychiatry. 2006;5(4):115–18. doi:10.1383/psyt.2006.5.4.115.
- Lewis, AJ. Melancholia: A historical review. Journal of Mental Science. 1934;80:1–42. doi:10.1192/bjp.80.328.1.
- American Psychiatric Association. Diagnostic and statistical manual of mental disorders: DSM-II [PDF]. Washington, DC: American Psychiatric Publishing, Inc.; 1968 [cited 2008-08-03]. Schizophrenia. p. 36–37, 40.
- Schildkraut, JJ. The catecholamine hypothesis of affective disorders: A review of supporting evidence. American Journal of Psychiatry. 1965;122(5):509–22. doi:10.1176/appi.ajp.122.5.509. PMID 5319766.
- Spitzer RL, Endicott J, Robins E. The development of diagnostic criteria in psychiatry [PDF]; 1975 [cited 2008-11-08].
- Philipp M, Maier W, Delmo CD. The concept of major depression. I. Descriptive comparison of six competing operational definitions including ICD-10 and DSM-III-R. European Archives of Psychiatry and Clinical Neuroscience. 1991;240(4–5):258–65. doi:10.1007/BF02189537. PMID 1829000.
- Gruenberg, A.M., Goldstein, R.D., Pincus, H.A. (2005). "Classification of Depression: Research and Diagnostic Criteria: DSM-IV and ICD-10". wiley.com. Retrieved October 30, 2008.
- Bolwig, Tom G.. Melancholia: Beyond DSM, beyond neurotransmitters. Proceedings of a conference, May 2006, Copenhagen, Denmark. Acta Psychiatrica Scandinavica Suppl. 2007;115(433):4–183. doi:10.1111/j.1600-0447.2007.00956.x. PMID 17280564.
- Fink M, Bolwig TG, Parker G, Shorter E. Melancholia: Restoration in psychiatric classification recommended. Acta Psychiatrica Scandinavica. 2007;115(2):89–92. doi:10.1111/j.1600-0447.2006.00943.x. PMID 17244171.
- The Antidepressant Era. Cambridge, MA: Harvard University Press; 1999. ISBN 0-674-03958-0. p. 42.
- Wolf, Joshua "Lincoln's Great Depression", The Atlantic, October 2005, Retrieved October 10, 2009
- Maloney F. Washington Post. The Depression Wars: Would Honest Abe Have Written the Gettysburg Address on Prozac?; November 3, 2005 [cited 2008-10-03].
- Karasz A. Cultural differences in conceptual models of depression. Social Science in Medicine. 2005;60(7):1625–35. doi:10.1016/j.socscimed.2004.08.011. PMID 15652693.
- Tilbury, F. There are orphans in Africa still looking for my hands': African women refugees and the sources of emotional distress. Health Sociology Review. 2004 [cited 2008-10-03];13(1):54–64. doi:10.5555/hesr.2004.13.1.54.
- Parker, G. Depression in the planet's largest ethnic group: The Chinese. American Journal of Psychiatry. 2001;158(6):857–64. doi:10.1176/appi.ajp.158.6.857. PMID 11384889.
- Parker, G. Is depression overdiagnosed? Yes. BMJ. 2007;335(7615):328. doi:10.1136/bmj.39268.475799.AD. PMID 17703040. PMC 1949440.
- Pilgrim D, Bentall R. The medicalisation of misery: A critical realist analysis of the concept of depression. Journal of Mental Health. 1999;8(3):261–74. doi:10.1080/09638239917580.
- Steibel W (Producer). Is depression a disease?; 1998 [cited 2008-11-16].
- Blazer DG. The age of melancholy: "Major depression" and its social origins. New York, NY, USA: Routledge; 2005. ISBN 978-0-415-95188-3.
- Hillman J (T Moore, Ed.). A blue fire: Selected writings by James Hillman. New York, NY, USA: Harper & Row; 1989. ISBN 0-06-016132-9. p. 152–53.
- Mary Shelley. Grove Press; 2002. ISBN 0-8021-3948-5. p. 560–61.
- pbs.org. Biography of Henry James [cited 2008-08-19].
- Burlingame, Michael. The Inner World of Abraham Lincoln. Urbana: University of Illinois Press; 1997. ISBN 0-252-06667-7.
- Pita E. An Intimate Conversation with...Leonard Cohen; 2001-09-26 [cited 2008-10-03].
- Jeste ND, Palmer BW, Jeste DV. Tennessee Williams. American Journal of Geriatric Psychiatry. 2004;12(4):370–75. doi:10.1176/appi.ajgp.12.4.370. PMID 15249274.
- James H (Ed.). Letters of William James (Vols. 1 and 2). Montana USA: Kessinger Publishing Co; 1920. ISBN 978-0-7661-7566-2. p. 147–48.
- Hergenhahn 2005, p. 311
- Cohen D. J. B. Watson: The Founder of Behaviourism. London, UK: Routledge & Kegan Paul; 1979. ISBN 0-7100-0054-5. p. 7.
- Andreasen NC. The relationship between creativity and mood disorders. Dialogues in clinical neuroscience. 2008;10(2):251–5. PMID 18689294.
- Simonton, DK. Are genius and madness related? Contemporary answers to an ancient question. Psychiatric Times. 2005;22(7).
- Heffernan CF. The melancholy muse: Chaucer, Shakespeare and early medicine. Pittsburgh, PA, USA: Duquesne University Press; 1996. ISBN 0-8207-0262-5.
- Mill JS. Autobiography [txt]. Project Gutenberg EBook; 2003 [cited 2008-08-09]. ISBN 1-4212-4200-1. A crisis in my mental history: One stage onward. p. 1826–32.
- Sterba R. The 'Mental Crisis' of John Stuart Mill. Psychoanalytic Quarterly. 1947 [cited 2008-11-05];16(2):271–72.
- Black Dog Institute. Churchill’s Black Dog?: The History of the ‘Black Dog’ as a Metaphor for Depression [PDF]; 2005 [cited 2008-08-18].
- Jorm AF, Angermeyer M, Katschnig H. Public knowledge of and attitudes to mental disorders: a limiting factor in the optimal use of treatment services. In: Andrews G, Henderson S (eds). Unmet Need in Psychiatry:Problems, Resources, Responses. Cambridge University Press; 2000. ISBN 0-521-66229-X. p. 409.
- Paykel ES, Tylee A, Wright A, Priest RG, Rix S, Hart D. The Defeat Depression Campaign: psychiatry in the public arena. American Journal of Psychiatry. 1997;154(6 Suppl):59–65. PMID 9167546.
- Paykel ES, Hart D, Priest RG. Changes in public attitudes to depression during the Defeat Depression Campaign. British Journal of Psychiatry. 1998;173:519–22. doi:10.1192/bjp.173.6.519. PMID 9926082.
Selected cited works
- American Psychiatric Association. Diagnostic and statistical manual of mental disorders, Fourth Edition, Text Revision: DSM-IV-TR. Washington, DC: American Psychiatric Publishing, Inc.; 2000a. ISBN 0-89042-025-4.
- Barlow DH. Abnormal psychology: An integrative approach (5th ed.). Belmont, CA, USA: Thomson Wadsworth; 2005. ISBN 0-534-63356-0.
- Beck AT, Rush J, Shaw BF, Emery G. Cognitive Therapy of depression. New York, NY, USA: Guilford Press; 1987. ISBN 0-89862-919-5.
- Simon, Karen Michele; Freeman, Arthur M.; Epstein, Norman (1986). Depression in the family. New York: Haworth Press. ISBN 0-86656-624-4.
- Hergenhahn BR. An Introduction to the History of Psychology. 5th ed. Belmont, CA, USA: Thomson Wadsworth; 2005. ISBN 0-534-55401-6.
- May R. The discovery of being: Writings in existential psychology. New York, NY, USA: W. W. Norton & Company; 1994. ISBN 0-393-31240-2.
- Hadzi-Pavlovic, Dusan; Parker, Gordon. Melancholia: a disorder of movement and mood: a phenomenological and neurobiological review. Cambridge, UK: Cambridge University Press; 1996. ISBN 0-521-47275-X.
- Royal Pharmaceutical Society of Great Britain. British National Formulary (BNF 56). UK: BMJ Group and RPS Publishing; 2008. ISBN 978-0-85369-778-7.
- Sadock, Virginia A.; Sadock, Benjamin J.; Kaplan, Harold I.. Kaplan & Sadock's synopsis of psychiatry: behavioral sciences/clinical psychiatry. Philadelphia: Lippincott Williams & Wilkins; 2003. ISBN 0-7817-3183-6. | 1 | 81 |
What Is It?
An implantable cardioverter defibrillator (ICD) is a potentially lifesaving medical device that is placed inside the body. An ICD treats life-threatening abnormal heart rhythms (called arrhythmias), including ventricular fibrillation, which makes the heart's large muscular chambers (the ventricles) quiver without actually squeezing and pumping. When this happens, there is no real heartbeat and not enough blood flows to the brain or other organs, including the heart. As a result, a person with ventricular fibrillation passes out and can die within minutes.
An ICD is made of two parts. The pulse generator looks like a small box. It is implanted under the skin below the collarbone. The box contains a lithium oxide battery (which lasts about five to nine years) and electrical components that analyze the heart's electrical activity. Connected to the pulse generator are one or more electrodes, which travel to the heart. When the ICD senses an abnormal heart rhythm, it administers a brief, intense electrical shock to the heart, correcting the abnormal rhythm. Many people say that the shock feels like being punched in the chest, although the amount of discomfort varies.
In addition to "zapping" the heart back to a normal rhythm, ICDs also can generate milder electrical impulses. These impulses can artificially regulate or "pace" the heartbeat if the heart develops other types of arrhythmias. For example, ICD impulses can help to slow down the heart when a person has ventricular tachycardia, an abnormally fast heartbeat. ICD impulses also can speed up the heartbeat in cases of bradycardia, an abnormally slow heartbeat.
An ICD also keeps a record of its actions. This record helps your doctor to monitor how often you have arrhythmias and how dangerous they are. It also allows your doctor to see how well the ICD is working.
ICDs must be checked periodically. Surgery isn't required. A special radio transmitter can receive information from the ICD. Also, ICDs can be reprogrammed to improve performance. Adjustments are made with a small, wandlike instrument held near the body.
To prevent an unexpected loss of power, ICDs have a built-in warning signal that tells the doctor when the battery is low. This signal appears several months before the battery expires. It can be detected in the doctor's office during a routine ICD checkup.
The first ICDs in the 1980s were fairly simple. They only treated ventricular fibrillation, and they had a relatively large pulse generator (about the size of a pack of cigarettes). Implanting them required major, open-heart surgery, followed by a long hospital stay. Newer ICDs have pulse generators that are approximately 1 inch (2.5 centimeters) in diameter and weigh less than 1.4 ounces (about 40 grams). Their pulse generators are inserted under the skin through a small incision, and the electrodes can be threaded through veins without surgically opening the chest. Because of these improvements, ICD implantation is now classified as minor surgery.
Over the past two decades, more than 200,000 ICDs have been implanted in people throughout the world. During this time, ICDs repeatedly have proven themselves as lifesaving devices. Most clinical studies confirm that people who have ICDs have a much lower risk of sudden death than comparable people who received other treatments. In people who have ICDs to correct ventricular fibrillation, the success rate is almost 100%.
ICDs are implanted in a hospital operating room or in a cardiac electrophysiology laboratory.
What It's Used For
Doctors use ICDs to prevent sudden death caused by certain types of heart arrhythmias. Your doctor may recommend an ICD in the following situations:
Preparation
Your doctor will review your medical history and allergies. He or she will ask for a list of your current medications. If you are taking medicines to prevent blood clots (anticoagulants), your doctor will tell you when to stop taking these drugs. He or she also will provide instructions about when to stop eating and drinking before the procedure.
How It's Done
Before surgery, you will dress in a hospital gown and remove any jewelry and watches.
The most common location to place the ICD's pulse generator is below the left collarbone. The skin in this area will be shaved, cleaned and numbed with a local anesthetic. If you need more than a local anesthetic to make you feel comfortable, your doctor may use conscious sedation, a form of anesthesia that allows you to remain awake and pain-free during surgery.
A small incision will be made in the numbed area near your collarbone. Next, a small incision will be made in a vein under your collarbone. This vein will be used as the passageway for threading the ICD's electrode(s) into your heart. Some ICD models use one electrode; others use more.
The doctor will insert the electrode(s) into the vein and guide the electrode(s) into your heart. X-rays will confirm that the electrode(s) are positioned correctly. Wires from the electrode(s) will be connected to the pulse generator, which will then be nestled near your collarbone. Your doctor will test the ICD to make sure it is working correctly. To do this, the doctor will trigger cardiac arrhythmias on purpose, and then observe how the ICD responds. During this part of your surgery, you will receive general anesthesia to allow you to sleep through the ICD testing.
Once the doctor is sure that your ICD is working properly, the incision will be closed with stitches (sutures) or surgical staples. The entire procedure usually takes one to two hours.
After surgery, your medical team will monitor your condition closely. During this time, your doctor may use a handheld magnetic instrument to make programming adjustments in your ICD. For the next few days, you will be given antibiotics to help prevent infection. If all goes well, your hospital stay should be brief.
Before you leave the hospital, you will receive instructions about safely recuperating. In particular, you should avoid heavy lifting and other strenuous arm movements for a few weeks. These activities can dislodge or shift the position of the ICD electrodes inside your heart.
You will receive information about driving restrictions and participating in contact sports. Your doctor also will tell you how to reduce your risk of electromagnetic interference, which can affect the programming and performance of your ICD. This interference can come from antitheft devices, surveillance equipment, cell phones, welding equipment and hospital machinery, such as magnetic resonance imaging scanners.
Before you go home, your doctor will give you information about the make and model of your ICD. Print this information on an identification card and carry it with you. You also may want to wear a medical alert necklace or bracelet that identifies you as someone with an ICD.
Follow-up
Your doctor probably will schedule your first checkup for one to two weeks after surgery. At this visit, the doctor will inspect your incision and remove your sutures or staples. He or she also will check that your ICD is working correctly.
After your first follow-up visit, you probably will return for ICD checkups every three to six months. If you have no problems or complaints, this follow-up schedule should continue for the next three to four years. During follow-up visits, your doctor will check your ICD's battery level, programming and electronic record. After three to four years, depending on your progress, checkups may be scheduled less often.
Risks
More than 99% of patients survive the ICD procedure, and fewer than 3% have complications related to surgery. Complications of ICD surgery can include:
Once the ICD is in place, there is also a long-term risk of:
When to Call a Professional
After your surgery, contact your doctor immediately if:
Seek emergency medical care immediately if you have an ICD and:
National Heart, Lung, and Blood Institute (NHLBI)
P.O. Box 30105
Bethesda, MD 20824-0105
American College of Cardiology
2400 N Street, NW
Washington, DC 20037
U.S. Food and Drug Administration (FDA)
5600 Fishers Ln.
Rockville, MD 20857-0001
Toll-Free: 1-888-INFO-FDA (1-888-463-6332)
Heart Rhythm Society
1400 K Street, NW
Washington, DC 20005 | 1 | 6 |
Australian Journal of Educational Technology
1989, 5(2), 132-160.
In this paper, the criteria for selecting modern learning technologies are discussed, and it is suggested that four teaching/learning activities, combined with a number of types of conceptual representations, might form the basis for selection. The most important aspect for a designer is the match between the learning task and its ability to be presented or manipulated by the learner using a decreasing range of information technologies.
Fifteenth century Europeans 'knew', that the sky was made of closed concentric crystal spheres, rotating around a central earth and carrying the stars and planets. That 'knowledge' structured everything they did and thought, because it told them the truth. Then Galileo's telescope changed the truth. (Burke, 1986, p.9)Over the past three years we have seen major changes in the information technologies. With the advent of the most recent computers such as the NeXT workstation, we are presented with a black box which enables words, numbers, visuals, sounds, dictionaries, thesauri, and external events to be controlled, manipulated and represented to the user in a variety of forms, often simultaneously, and also to other users linked into a network. Over this period, significant developments have also occurred in conceptualising research into the use of media in education and training. It is this relationship which forms the basis of this paper; the discussion will be divided into three main elements-technology, instructional design, and some ways of bridging the cultures and selecting modern media.
We live in a world where ideas and manipulations can be achieved simply with tools such as computers and computer-controlled robots; the challenge for instructional designers is to recognise the possibilities and employ technologies through which the learner can manipulate the ideas, concepts and even physical skills being taught. In the past, when media have been selected for learning, the algorithms often focussed upon the simple identification of attributes: motion versus still, colour versus black and white, projected versus opaque, etc (see, for example, Kemp, 1977 & Romiszowski, 1981). With the sophistication of today's learning technologies, these rather simple conceptions are no longer adequate. The choices are most often within one medium rather than between a variety of media forms. The classification schemes are difficult to use when you are looking at combinations of forms within the one lesson presentation. To achieve better use of information technologies the instructional designer needs more than a simplistic grasp of the possibilities of the technology.
The movement towards more integration of systems and technologies has provided an interesting environment for designers. It is becoming less necessary to learn about the diversity of different hardware systems as they start to adopt common user-interfaces and employ one or two formats for delivery. By way of a simple example, the new disk drives available with the latest Macintosh computers can read and write Apple II, Macintosh and IBM, high and low density formats - one drive suits all! Thus conceptualising anything in narrow hardware terms will not address the concepts to be learned and cognitive requirements of the task.
This approach has always been limited by the availability of the necessary equipment, but such a limited conception of technology should not be the driving force for developing instructional programs for the next decade. The cost of hardware is decreasing, and the number of elements required to form a useful workstation is also declining.
The workstation concept, which has grown with the advent of the word processor and the microcomputer, on which most are based, has enabled the presentation and manipulation of concepts in ways previously only possible with combination of media forms or more sophisticated computer systems. This power of manipulation and presentation of ideas has not gone unnoticed by such proponents for the use of technology in the teaching of mathematical ideas (Kaput, 1986; Papert, 1980; Pea, 1987). Foremost among these enthusiasts has been Seymour Papert, who generated some interesting challenges for educators with his book Mindstorms just over eight years ago. Since those first challenges, the technologies, which enable the manipulation and generation of ideas, have also developed. Four to five years ago the Macintosh burst onto the scene and provided the user with a graphic interface as a standard. The user was then able to manipulate concepts visually and more intuitively than had been available on mainframes or under the mnemonic operating systems of some personal computers.
The provision of these powerful tools has enabled concepts to be understood more completely and learned more efficiently. Understanding the integral calculus of LOGO can lead to complex mathematical ideas in an intuitive context well before the student has progressed to levels of formal operational thought. Dealing with pictorial representations has also enabled the designer to present complex concepts in forms that are seductively simple to the learner. Shapes can be stretched and distorted by manipulating a "mouse" attached to "handles" of the figure. The latest graphics drawing tools use tangential line "handles" to change curvature and create complex smooth figures.
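The intuitive character of LOGO-style geometry described above can be sketched in a few lines of code. The example below is in Python rather than LOGO itself, purely as an illustration: repeating a local "go forward a little, turn a little" command builds a global shape, a circle, without any appeal to formal analytic geometry. The function name and parameters are hypothetical, not drawn from any particular package.

```python
import math

def turtle_polygon(steps, step_length, turn_degrees):
    """Trace a path of repeated 'forward, turn' commands and return its vertices.

    With many small steps and small turns the path approximates a circle -
    the kind of intuitive, body-centred geometry Papert describes.
    """
    x, y, heading = 0.0, 0.0, 0.0
    vertices = [(x, y)]
    for _ in range(steps):
        # Move forward in the current heading, then turn a little.
        x += step_length * math.cos(math.radians(heading))
        y += step_length * math.sin(math.radians(heading))
        heading += turn_degrees
        vertices.append((x, y))
    return vertices

# 360 steps of one unit, turning 1 degree each time: the turtle comes home.
path = turtle_polygon(360, 1.0, 1.0)
end_x, end_y = path[-1]
```

The learner discovers the total-turtle-trip theorem (the turns sum to 360 degrees and the path closes) by experiment, long before meeting the formal proof.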
Technologies, and particularly information technologies, are at a point where they can easily integrate a variety of components into one device. With increasing power of small systems there are also other trends which predict a greater integration of technologies and a corresponding reduction in the currently considered separate hardware/communications technologies. Nicholas Negroponte, Director of the Massachusetts Institute of Technology Media Lab, has described the situation as a series of overlapping circles. Using figure 1, he indicated to senior executives in the communications industries that their strategic planning for the future should take into account the convergence of technologies and that their products would increasingly become interchangeable and 'playable' on the one computer-based system.
Figure 1: Converging technology industries (Negroponte, in Brand, 1988)
An excellent example of where Negroponte's conception would lead is epitomised through recent developments in personal computers, such as Steve Jobs' NeXT computer, where a number of information storage devices are combined, quite literally, into one black box. These developments can prove a boon to the designer in that more senses can be employed in the learning interaction between the learner and the technology. However, at the same time they raise instructional design challenges about the way the interaction should be developed. In studies of technology-as-hardware, student learning has not been enhanced by the hardware alone; other factors, in particular the design of the learning materials using the technology, have been more important (Clark, 1983; Johnson et al, 1988; Salomon, 1979).
During the 1970s and 1980s, numerous authors have written about the technology-as-process approach to curriculum design (Reiser, 1987; Percival and Ellington, 1988). In a recent summary, Percival and Ellington (1988) outline the changing major concerns of the approach.
Within this context, any technology might be described as a mediator between the three human components of the interaction: the subject matter/content expert, the instructional designer and the learner. Technology, on its own, is inanimate and lifeless; the human manipulation of the interaction creates the power of the technology for learning. The link between the original expert and the learner can be considered to be mediated through the attributes of the technology employed and the skills of the instructional designer (who incidentally may also be the teacher or instructor). The content organisation and the attributes of the technology the designer employs to present the ideas will help or hinder the learner's comprehension of them (Salomon, 1979). Learners, in turn, have their own individual understanding or conceptual sets which they apply to the presented materials to achieve mastery of the knowledge and information presented. Engelbart (1988) illustrated the concept when he described the attributes of a hypermedia (note 1) environment (Figure 2), which augments human capabilities. His thesis is that most human capabilities are composites; any "example capability" can be thought of as a combination of the human-system and the tool-system capabilities. This process is possible given the human skills and knowledge to employ these systems. It is this last skill, the knowledge to employ the technology, which is a major variable in technology adoption.
Figure 2: Extending the capabilities of the individual through technology (Engelbart, 1988)
In order to demonstrate how the instructional designer and the learner can use appropriate technology to improve skills, conceptual understanding and the process of communication of ideas, it becomes important to examine the current conceptions of how technology might be employed and what skills are required of both instructor and learner.
Many of those coming to terms with technology in higher education are representative of these groups. Greater emphasis is being placed on learner involvement in learning, and demands are being made for a broader knowledge base. Thus learners are being compelled to venture into areas which were once the realms of specialists. For example, work has been undertaken with interactive videodiscs (note 2) where students can explore databases of realistic situations in the security of the classroom, and the technology enables them to become involved and make decisions. These decisions can be about key issues, such as chemical experimentation or future employment. This interaction can occur without fear of failure (Scriven and Adams, 1988; Ambron & Hooper, 1988).
Word processing and literature searching are two common examples of increasing technology use as an extension of human capabilities. Traditionally, assignments were handwritten or an author employed a typist to create a respectable assignment presentation. The proliferation of word processors has changed that. Assignments must now be at least typewritten, preferably word processed, spellchecked and, in some instances, be presented with integrated illustrations and graphics laid out using a page layout program. Hard copies are not always required either. Some instructors request assignments to be submitted on disk, or in the case of distance education, assignments can be downloaded via a modem or placed on a bulletin board.
In the area of literature searching, the contents of the school or institution's library sufficed or, if not, a researcher made an appointment with the "on-line search" specialist librarian to conduct a (rather costly) literature search. The advent of databases on CD-ROM (note 3) has enabled a "do it yourself" approach. This easy and cheaper alternative is encouraging academia to incorporate a more comprehensive review of the literature in areas which were once the kingdom of the textbook. Realistically though, not everyone employs technology in achieving a goal, and many teachers, while using a technology at a basic functional level, do not think in terms of its potential to assist human thought and concept development (Office of Technology Assessment, 1988; Roblyer et al, 1988).
From the work at the MIT media lab and the growing awareness of integrating technologies such as CD-ROM, CD-I (note 4), and DV-I (note 5), there are predictions that, not only will the future classroom be well equipped, but these systems will also allow home use at reasonable cost. The move over the next few years will be to publish and present knowledge in these technologies (see for example, Bitter, 1988; Hativa, 1986; Hedberg, 1989).
Information technology-based teaching materials are often confined to the role of sophisticated presentation devices. However, with existing applications software, there is the opportunity for the student to use applications software packages for knowledge generation as well as knowledge presentation (see for example, Hedberg, 1988a).
Frustration and anxiety are a part of the daily life for many users of computerised information systems. They struggle to learn command language or menu selection systems that are supposed to help them to do their job. Some people encounter such serious cases of computer shock, terminal terror, or network neurosis that they avoid using computerised systems. These electronic-age maladies are growing more common; but help is on the way!

While new and exciting aspects of information technology and its use are constantly being brought to the attention of the higher education community, the human-technology interface seems to have attracted attention in education only in recent years (e.g. Barrett and Hedberg, 1987; Shneiderman, 1987). This issue becomes more important when considered in the light of the problems faced by teachers as learners as they attempt to understand and use the technology as a tool. In summarising the state of technology adoption by teachers, the Office of Technology Assessment (1988) found that interactive technologies take more time and effort to learn than many other curricular innovations, and their use made teaching a bit tougher, at first. The choice of an appropriate technology for learning might focus on these issues, if more general use is to be made of the technology by teachers.
...the diverse use of computers in homes, offices, factories, hospitals, electric power control centers, hotels, banks, and so on is stimulating widespread interest in human factors issues. Human engineering, which is seen as the paint put on the end of a project, is now understood to be the steel frame on which the structure is built (Shneiderman, p. v, 1987).
Several participants had never used a computer before. Not only was the idea of data storage on a small disc unfamiliar to them, so too was the means of accessing the disc. Some of the most prohibiting factors were the necessity of knowing specific identifying words, the need to press specific keys for the generation of particular information, and the methods of correcting errors in typing or input. At a deeper level, several participants were willing to accept the first instance of information which appeared on the screen, without checking for details or the appropriateness of the response. They firmly believed that the computer could not err (even if the error was in human input), and therefore the information must be correct. Besides the need for keyboard skills, which created a barrier to effective use of the technology, many participants concentrated more upon following correct procedures than upon the information being presented. Optimistic assumptions about teachers' ability to use technology frequently cause problems with the instructional strategies in which the technology is employed.
A related problem has been that some current application software appears to the novice user to have been written by those "in the know". Although most applications programs incorporate "help" mechanisms (approximately twenty-two screens of help were found in one database program), these resources are beyond the grasp of the novice user, or of one unfamiliar with the "language" needed to reach and read the "Help" file.
The most important catchcry of the computer-based education enthusiasts has been learner control. However, while there are numerous studies indicating its importance for motivation and efficient learning, its actual implementation in courseware is often only lip service. Learners, to take control of their learning experience with technology, still need to understand how the software they are using works and where they stand in their performance, so that they can make informed decisions about where to venture next. The current enthusiasm for Hypercard (note 6) as a medium for exploration is based on the ability of the keen learner to choose a path and enjoy the options. At any moment the student can review where they have been and jump directly to a particular screen (through the "recent" review function); this degree of flexibility and graphic summary of progress has either not been possible before in courseware or has simply been too difficult to include. While its impact has not been fully explored, the opportunity for a "hyperview" of their learning sequence does enable greater control of what and how some things can be learned. An extensive summary of the hypermedia options becoming available has been provided by Ambron and Hooper (1988), and this challenges the developers of computer-based software to conceive of different formulations of instructional sequences in place of the routine drill and practice, tutorial, simulation, and problem solving strategies of the past.
There is a growing realisation that the forms of software presentation can now adapt to the modes of representation and learning styles preferred by individual learners. Visual learners can convert data tables into graphical forms; haptic learners can use robotics to see, touch and feel the meanings of computer commands and their effect on an object. The link between formal logic structure and physical representation can be explored in terms of a functional relationship. In the mathematical curriculum it is possible, with software such as Geometric Supposer and Function Builder, to investigate and manipulate ideas in a one-to-one relationship. A change in the mathematical function will be shown by a change in its graphical representation, and modifying the graphical representation will produce a corresponding change in the function. On a more concrete level, the work by Papert and his colleagues with Lego LOGO also enables this link to be investigated (Papert in Brand, 1988).
The use of the technology is not purely a function of the availability of equipment, it is also a problem of understanding the technology as a tool for thinking. While it might never be expected that all teachers will use the technology as a tool for everyday knowledge generation and presentation, special groups such as mathematics and science teachers do have some conceptual advantages in using the technology from a discipline point of view. However, that alone is not sufficient. In describing the effectiveness of an interactive videodisc mathematics lesson, Carnine, et al (1987) emphasised the importance of instructional design in the materials which made them more effective than teachers working by themselves with a computer. These authors emphasised that instructional design skills were equally as important as the provision of the curriculum materials.
Even with less sophisticated materials, such as the production of class handouts, there are new skills involved in the preparation of printed curriculum materials using the skills of typist/graphics composer/page layout compositor. The microcomputer has required a re-working of tasks and roles. The availability and accessibility of this technology has enabled individuals to work directly with the material which is going to be used in the teaching process. The immediacy and closeness with which individual authors can work on their material has meant that high quality materials can be presented quickly and designed to improve learning and increase their effectiveness.
Taking account of students' prior conceptions. One of the key elements in the materials designed was the deliberate linking of previous learning by means of the technology to scientific method and theory, so that the materials created an environment in which new data and phenomena could be transformed from naive understanding into more lasting and sophisticated ideas. In many projects technology enabled students to work with their own levels of understanding and with representations of knowledge with which they were comfortable.
Integrating directed instruction and inquiry learning. One of the concerns with instructional strategy led the team to apply a different approach to those previously advocated by the proponents of microworlds (Papert in Brand, 1988). The mix in instructional strategy was to overcome the problems of extremely open-ended environments which, they believed, rarely led to students reconstructing concepts that mathematicians had taken centuries to devise. By designing materials which employed technology in a hybrid of direct instruction and inquiry learning, teachers helped students develop and test their own ideas. Commercially available software was employed in this type of activity.
Teaching how knowledge is generated. One ETC project, the Nature of Science Project, used a variety of resources to produce an understanding of scientific thinking within the context of specific phenomena. An interactive videodisc was used to investigate several "black box" problems. With this technology a series of conjectures could be investigated without expensive experimental equipment and the results of each manipulation of variables could be easily demonstrated. When this introduction to the experimental method was combined with real experimentation, students moved away from narrow beliefs about science to understand that it originates in the mind of the scientist and that it involves persistent examination of ideas.
These concepts about teachers and teaching strategies are not unique to this series of projects. The work at the MIT Media Lab and their associated elementary school has created similar environments for learning, with success for learners at different levels of ability. The outcome of all such activity has been to re-examine the role the teacher and technology can play; no longer can the teacher simply relinquish his/her presentation to an audiovisual presentation device; the teacher must take an active role in supporting the inquiry.
As to the other aspect of insufficient curriculum software, many writers have promoted the use of templates for applications software (Hedberg 1988a). What is more important is the structure of the exercise and the ability of the student to change elements in the model. When the choice of appropriate hardware is linked with potential software, then great advances can be made at very little cost and with little time spent in software development. Hypercard and Linkway are two programs which enable users (whether they be teachers or students) to design a series of experiences which can present ideas and manipulate them cheaply with the minimum of programming effort. Further, as it is possible to exchange software produced on these systems, the cost of running a range of curriculum materials is the cost of the disk. Recently, Club Mac released a CD-ROM of all its software. Only one would be needed at each school, as most material is in the public domain. Further, simple authoring software is becoming available in this format, allowing teachers or typists to input tests and experiences which can be quickly modified.
Compatibility issues. Over the years most educational systems, whether they be State Education Departments, universities or individual schools, have sought to simplify the process of compatibility by insisting on one or two machines. This is becoming less and less of a major problem. With bulletin boards it is a simple matter of copying files from one computer to the other. Often software is written in languages which enable transportability of software such as "C". This trend, when matched with the growing capability of reading and writing magnetic media from any of the three main systems (IBM, Apple II, or Macintosh) and the links between major mainframe and micro manufacturers (e.g. Digital and Apple), would indicate that there should be little real concern for constraining unified hardware requirements.
Laurillard (1987) has spoken of the development of multifaceted design models and Hedberg (1988a) has mentioned the use of templates as simple ways that link the use of technology to regular tools which are in common (preferably daily) use by the learner. Such a concept needs first to examine the reasons for using technology in the teaching/learning process. For example, the use of the simple device of a spreadsheet with a prepared mathematical model allows at least three levels of processing. First, a learner may type their own numbers into a prepared pro forma, the package will calculate according to the prepared algorithms and changes in different elements will show a relationship between inputs and results. Changing the inputs allows the learner to model different results based on the input assumptions. A second level might involve the translation of the numbers into another form of representation such as a chart. This second level may have been already prepared by the instructor and the links simply updated as the learner changes the numbers in their first pro forma, or the learner might use the links between spreadsheet and charting routines to clarify or further investigate relationships (especially if they are a visual learner). A third level would enable the learner to change the underlying assumptions on which the analysis is based - the learner might decide to investigate the algorithms devised for the relationships between inputs and results. By changing the formulae, the learner can extend beyond the interaction designed by the subject matter expert and the instructional designer. At both the second and third levels, the learner is manipulating the technology to generate knowledge rather than simply to watch its presentation. Thus the technology allows the student to extend his or her understanding beyond the original intents.
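The three levels of processing can be made concrete with a small sketch. Python stands in here for a spreadsheet package; the formula (simple interest on a principal of 100) and the function names are invented for illustration, not taken from any actual template.

```python
def run_model(inputs, formula):
    """Level 1: the learner changes the inputs; the prepared
    formula recalculates the results."""
    return [formula(x) for x in inputs]

def text_chart(values):
    """Level 2: the same numbers translated into another
    representation (here, a crude bar chart)."""
    return ["#" * int(round(v)) for v in values]

# The designer's prepared algorithm: simple interest on a principal of 100.
prepared = lambda rate: 100 * rate

inputs = [0.05, 0.10, 0.15]
results = run_model(inputs, prepared)
chart = text_chart(results)

# Level 3: the learner replaces the underlying formula itself -
# say, compound growth over two periods - and re-examines the relationship.
learners_own = lambda rate: 100 * ((1 + rate) ** 2 - 1)
revised = run_model(inputs, learners_own)
```

At level one only the numbers change; at level two the representation changes; at level three the learner rewrites the model itself, which is where knowledge generation rather than knowledge presentation begins.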
Recent work has tried to reassess the functions of technology in terms of the type of tools required for different types of learning activities. Consider Table 1, where four key activities for teaching and learning are described - knowledge generation, knowledge presentation, knowledge communication and information management. The instructional designer needs first, to focus on the underlying learning activity, then secondly, define a link between the concept presentation and how the students must work with the information to produce their own understanding of the ideas and issues. Foremost in this design concept is the idea of allowing the student to manipulate the concepts directly, and not to have the presentation totally circumscribed by the designer, who might decide to present information in a single conceptual model.
Thus the model presented here is concerned with two basic functions of a technology for learning-teaching/learning activity and form of knowledge representation. Additionally, because learning may occur at a time or distance remote from the tutor, knowledge must also be communicated with others. The communication of results, questions and corrections between tutor and learner, or amongst students, is of particular interest, and technology can influence and assist the quality of this interaction. As mentioned previously, bulletin board software can be used to generate insights beyond the prepared brief of the designed materials.
The last teaching/learning activity illustrated in the model indicates the important management function involved in all materials to be used in learning. Personal productivity software, when linked together, can provide a useful organising force for tutor, designer and student, especially for time management or idea generation.
Each of the four teaching/learning activities can use technology in a variety of forms. Each different form is appropriate or needed for the ideas or concepts to be understood by the learner. Using current information technology we are no longer constrained to the simple verbal form. Mixtures of sound, music, words, pictures or moving sequences can be integrated into each teaching/learning activity. With computer control of external devices, it is possible to manipulate objects in three dimensional space and to link them with graphical or numerical representations.
Richey (1986) emphasised that instructional design has been distanced from teachers when she opened her book:
Planning instructional programs and materials has been separated from the jobs of those who actually deliver the instruction in a growing number of situations .... The dichotomy between instruction and instructional design ... is ... influenced by different theoretical orientations and different practice histories (Richey, 1986, p.2)
Producing materials can occur through enthusiastic teachers, through teacher educators or as demonstrated by the ETC example at Harvard, through a collaborative approach of both. Models of instructional design abound in the literature, and most of the recent attempts to link technology with practice have simplified the process and reduced the complexity of previous behavioural prescriptions. Emphasis is upon structuring the curriculum so that it can be represented by simple "epitomes" (see Reigeluth, elaboration theory, 1987) and graphical links between concepts and motivating environments (Reiser, 1987). Many organisations who must manage the production of learning resources operate on the just-in-time method for their generation. The cost of inventories, the complexity of multi-media storage, and the deterioration of electronic media with poor storage and time has meant that many curriculum packages are produced on-demand. These factors do not necessarily require a centralised production source. Most reasonably large organisations already possess the infrastructure to produce materials without the need for further bureaucratic centralisation. In fact, the notion is generally antagonistic to trends of development in information technology and the way in which people adapt and implement new technology. However, there is definite need to assist with the identification of good products which are often hidden in a growing mountain of alternatives. Instant access to information about and evaluations of packages, together with cheap copies of their associated documentation, can be made available through public bulletin boards and/or distributed through CD-ROM or other large database storing technology.
Propinquity is also a major factor in producing a product. The fact that the subject matter expertise, the design expertise and a computer are frequently within walking distance of each other will help the production of materials in ways not envisaged in the traditional bureaucracies of curriculum development centres. However, it is very unlikely that any economies can be achieved without some coordinated curriculum development of quality and with an eye to appropriate technology for the learning task.
It is difficult to predict future hardware formats and the most appropriate technology in which to develop resources. At the moment, the push is to use pre-recorded formats (usually optically encoded), such as CD-ROM, although the recently released NeXT computer uses an optical read-write system holding about 250 megabytes. WORM technology exists to enable writing data once on optical media and then being able to read many times. Entire manufacturing plants are run on WORM technology. No paper is generated; everything is added and changed in centralised filing systems. However, most current projects have considered interactive videodisc which requires less change to existing systems of recording and distribution. Publishing companies are considering CD-I (digital, interactive, multimedia systems) as a potential device for distribution of interactive training, reference books, albums, home learning and do-it-yourself learning, either with or without the computer (in the latter case the technology would be built into the system). Some commercial companies promise DV-I with up to 75 minutes of full screen video and 3D motion pictures (see discussions in Bitter, 1988; Scriven and Adams, 1988). Whatever the final hardware choice, the growing trend toward file conversion and similar magnetic media formats will probably continue for the next few years. This development alone will enable exchange of software between the major systems.
I - drill and practice/tutorial

Recent educational software has provided instruction for both student and teacher, and it supports activities which are seen as important by the instructor (see for example, Geometric Supposer [Schwartz & Yerushalmy, 1985] and The Voyage of the Mimi [Gibbon in Ambron & Hooper, 1988]).
II - simulation and new forms of representation.
The design of "intelligent" software does not necessarily mean the move to more complex artificial intelligence systems; it could mean simply using the ideas of good game design which engages students by providing fantasy, creativity and challenge (Malone, 1981). Simulations should be open-ended and allow students to generate knowledge rather than merely manipulate the parameters (Hedberg, 1989b; Goldenberg, 1988). Extending the range of experience through the use of peripherals such as CD-ROM and videodisc should be seen as commonplace rather than special events. The work undertaken with the only Australian videodisc system produced specifically for schools (Steele, 1988) has demonstrated that the systems can work. However, it does require the vision of educational departments, intelligent interactive media design, and a small additional investment in a distribution technology which is more robust and of higher quality than anything currently available.
The move from traditional conceptions of what educational software might present with hypermedia involves greater control for teachers and modifiability of the software (Hativa, 1986). Early concepts of software saw instructional strategies being clearly defined and fixed within each software package. Recent systems have also included artificial intelligence components which enable strategies to be more closely matched to the learning style (Criswell, 1989). Even without artificial intelligence components, the move into Hypertalk language structures has enabled greater flexibility in design and the use of environments. Certainly, the addition of interactive videodisc and CD-ROM is a simple task and one that extends the capabilities of the software design (see Fielded and Steele, 1988, Ambron and Hooper, 1988, Hedberg, 1985).
Throughout the preceding discussion, there have been a number of examples which indicate that media can provide a unique and useful contribution to a concept presentation. Of particular interest are its abilities such as linking multiple representations of a concept and linking physical demonstrations through robotics or hypermedia to their theoretical counterparts.
Simplistic software design or thoughtless use of computer graphing in classrooms may further obscure some of what we already find difficult to teach. On the other hand, thoughtful design and the use of graphing software presents new opportunities to focus on challenging and important mathematical issues that were always important to our students but were never accessible before. (Goldenberg, 1988, p. 135)

Many of the popular descriptions from the work of Seymour Papert have included descriptions where one student suddenly became the "expert" for some time and, for one brief shining moment, was looked up to by their fellow students (Papert, 1980; Papert in Brand, 1988). The environment provided by Lego LOGO and some multimedia software packages can provide for the social aspects of learning.
Improved student performance was experienced in a videodisc based lesson on fractions. Carnine et al (1987) put this effect down to a number of factors, especially, the carefully selected curriculum and the teaching strategies which fostered high levels of student engagement and success. The teaching strategies employed included a concern for example selection, an explicit teaching strategy and discrimination practice to reinforce the concepts. Carnine et al claimed that the instructional design of the videodisc was critical in the development of improved student learning. All too often they felt that the use of inappropriate elements of design in poorly conceived materials interfered with or contradicted the intent of the curriculum. Importantly in their study, they were concerned for the use of the technology with a group based on the research summarised by Bangert, Kulik and Kulik (1983) which found there were often stronger effects for group learning than when the same materials were used individually.
Representational correspondence can also be used to effect when dealing with difficult-to-grasp concepts such as the notion of a variable. With well designed software it is possible to create new concepts using both abstract and concrete models (Goldenberg, 1988; Janvier, 1987).
There are a number of unresolved questions about the use of windows in educational software, especially how the user comprehends how different windows relate and how consistent the interpretation is. Consider, for example, overlapping windows versus tiled windows (non-overlapping segments of one screen): often it is easier to understand what is happening if a number of things which are happening simultaneously always occur in the same part of the screen. This means a more expensive screen system and certainly a higher resolution system. Many of these issues have not been investigated with non-expert audiences, the research on human factors to date being largely related to business and military applications.
To improve the learning experience, software that enables the learner to have control over more than just parameters is to be preferred. Students need to be able to control the underlying function as well as the parameters which might be the subject of a constrained set of experiences (Goldenberg, 1988; Kulik & Bangert-Downs, 1983-1984). A few years ago, the Curriculum Development Centre in Canberra was interested in a small package which simulated the fishing economy of a Pacific island village. The materials were designed to include a number of graphics, but the interaction consisted purely of setting the values of three parameters and watching the wealth of the community and the size of the fishing fleet change as the parameters varied. Students were not able to examine the functions on which these relationships depended, a short-sighted design. It would have been just as easy to use a spreadsheet template and allow the students to change values, as well as the functions, and view the outcomes in graphical or numerical form. This approach is possible using commercial spreadsheet programs at a fraction of the cost of distributing specially coded software written in BASIC and running on only one computer. Thus designing a spreadsheet template would have taken less time, and could be more easily adapted for different packages and computers.
Other presentation factors in computer-based material, such as the speed of execution, may hide the development of the idea. The speed with which an object is drawn or an equation solved has often led to an emphasis on the Gestalt rather than the incremental development of the idea (Goldenberg, 1988; Schoenfeld, 1987). Some software packages have had to slow down the presentation of information so that the developmental steps can be shown.
Scale, another difficult concept, can be confused by poorly executed software. It can be difficult for some students to determine the difference between a change in scale and "zooming" into a section of an object, where the scale is not changed, only its representation on the screen. This problem can be further complicated by multiple windows, as mentioned above. Although changes in scale are easily achieved with computers, there can be confusion between zooming in on a scale and actually changing the scale (ETC, 1988; Goldenberg, 1988). Scale can also be complicated by a simple change of screen size. With some computer systems, the same representation on different screen sizes will appear a different size, and there is no continuity of experience. Some computers enable a fixed-size screen representation, leading to consistency in scale representation across different screen sizes.
One of the interesting concepts that computers enable learners to manipulate is the idea of the finite versus the infinite. With the technology, even the best representation is still composed of finite pixels, and there are always jumps between elements.
Consider the restructuring of knowledge which is required to develop an electronic encyclopedia (Kreitzberg & Shneiderman, 1988). The hypermedia approach to materials design that the new technology allows creates some interesting problems for someone who previously "thumbed through" a book. Electronic media require multiple indexes to point to the information. The student cannot easily browse in the traditional sense. Browsing is possible in that several of the programs now available allow a browse function which rapidly scans each "card" in a database, and the user can click to stop the process at any time. The technique is really limited to looking at some sample items and small databases, but some users not at ease with the technology have been known to sit and watch them all in order to find just one relevant item! Students require multiple points of access and tolerance of spelling mistakes to find appropriate information. The problems of information retrieval are not insignificant, but the storage cost of multiple and idiosyncratic indexes is not beyond possibility with CD-ROM and other technologies.
If the instructional designers are excited, then there is the chance some of that excitement and creative energy will be communicated to those who learn from the materials they design.
Bangert-Downs, R. L., Kulik, J. A., & Kulik, C. L-C. (1985). Effectiveness of computer-based education in secondary schools. Journal of Computer-Based Instruction, 12(3), 59-68.
Barrett, J. and Hedberg, J. G. (Eds.) (1987). Using Computers Intelligently in Tertiary Education. Sydney: ASCILITE.
Bitter, G. G. (1988). CD-ROM Technology and the classroom of the future. Computers in the Schools, 5(1/2), 23-34.
Brand, S. (1988). The media lab. New York: Penguin.
Bright, G. W. (1987). Computers for diagnosis and prescription in mathematics. Focus on Learning Problems in Mathematics, 9(2), 29-41.
Bright, G. W. (1989a). Teaching mathematics with technology: Logo and geometry. Arithmetic Teacher, 36(5), January, 32-34.
Bright, G. W. (1989b). Teaching mathematics with technology: Numerical relationships. Arithmetic Teacher, 36(6), February, 56-58.
Brod, C. (1984). Technostress: The human cost of the computer revolution. Reading, MA: Addison-Wesley.
Burke, J. (1986). The day the universe changed. Boston: Little, Brown and Company.
Carnine, D., Engleman, S., Hofmeister, A., & Kelly, B. (1987). Videodisc instruction in fractions. Focus on Learning Problems in Mathematics, 9(1), 31-52.
Clark, C. M. (1988). Asking the right questions about teacher preparation: Contributions of research on teacher thinking. Educational Researcher, 17(2), 5-12.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459. (Citation included in Kerr re relative effectiveness of media based education)
Clark, R. E. (1985). Confounding in educational computing research. Journal of Educational Computing Research, 1(2),137-148. (Citation included in Bitter re effectiveness of CBT)
Criswell, E. (1989). The design of computer-based instruction. New York: Macmillan.
Educational Technology Center, (1988). Making Sense of the Future: A position paper on the role of technology in Science, Mathematics and Computer Education. Cambridge, MA: Harvard Graduate School of Education.
Engelbart, D. C. (1988). The augmentation system framework. In S. Ambron & K. Hooper, (Eds.). Interactive Multimedia: Visions of multimedia for developers, educators, and information providers. Redmond, WA: Microsoft Press.
Fielden, K. & Steele, J. (1988). Hypercard and interactive video. In J. Steele & J. G. Hedberg (Eds), EdTech'88: Designing for learning in industry and education. Belconnen, ACT: AJET Publications. pp.43-50. http://cleo.murdoch.edu.au/gen/aset/confs/edtech88/fielden.html
Goldenberg, E. P. (1988). Mathematics, metaphors and human factors: Mathematical, technical and pedagogical challenges in the educational use of graphical representation of functions. Journal of Mathematical Behaviour, 7(2),135-173.
Hativa, N. (1986). The microcomputer as a classroom audiovisual device: The concept, and prospects for adoption. Computer Education, 10(3), 359-367.
Hedberg, J. G. (1985). Designing interactive videodisc materials. Australian Journal of Educational Technology, 1(2), 24-31. http://www.ascilite.org.au/ajet/ajet1/hedberg2.html
Hedberg, J. G. (1988a). Technology, Continuing Education and Open Learning or Technology 1 - Bureaucracy 0. In J. Steele, and J. G. Hedberg (Eds.), Designing for Learning in Industry and Education. Canberra: Australian Society for Educational Technology, pp90-94. http://cleo.murdoch.edu.au/gen/aset/confs/edtech88/hedberg.html
Hedberg, J. G. (1988b). Designing Ask the Workers...: Teams and conceptualisation. In J. Steele (Ed.) Ask the Workers...: Evaluation. Sydney: Australian Caption Centre. pp17-35.
Hedberg, J. G. (1989a). CD-ROM: Expanding and shrinking resource-based learning. Australian Journal of Educational Technology, 5(1), 56-75. http://www.ascilite.org.au/ajet/ajet5/hedberg1.html
Hedberg, J.G. (1989b). The relationship between technology and Mathematics Education: Implications for Teacher Education. In Department of Employment, Education and Training, Discipline Review of Teacher Education in Mathematics and Science. Vol 3. Canberra: Australian Government Publishing Service, pp103-137.
Hedberg, J. G. and McNamara, S. E. (1985). Matching Feedback and Cognitive Style in Visual CAI Tasks. Paper presented to the Annual Conference of the American Educational Research Association, Chicago, May.
Hedberg, J. G. and McNamara, S. E. (1989). The Human-Technology Interface: Designing for distance and open learning. Educational Media International, 26(2), 73-81.
Jackson, P. W. (1986). The practice of teaching. New York: Teachers' College Press.
Johnson, D. L., Maddux, C. D. & O'Hair, M. M. (1988). Are we making progress? An interview with Judah L. Schwartz of ETC. Computers in the Schools, 5(1/2), 5-22.
Johnson, J. L. (1987). Microcomputers and secondary school mathematics: A new potential. Focus on Learning Problems in Mathematics, 9(2), 5-17.
Kaiser, B. (1988). Explorations with tessellating polygons. Arithmetic Teacher, 36(4), December, 19-24.
Kaput, J. J. (1986). Information technology and mathematics: Opening new representational windows. Journal of Mathematical Behaviour, 5(2), 187-207.
Kaput, J. J. (1987). Translational processes in mathematics education. In C. Janvier, (Ed.), Problems of Representation in the Teaching and Learning of Mathematics. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 19-26.
Kemp, J. E. (1977). Instructional Design: A Plan for unit and course development. (2nd ed.) Belmont, CA: Fearon-Pitman.
Kerr, S. T. (1989). Teachers and technology: An appropriate model to link research with practice. Paper presented to the Annual Conference of the Association for Educational Communications and Technology, Dallas, Tx, February 1st to 5th.
Kreitzberg, C. B. & Shneiderman, B. (1988). Restructuring knowledge for an electronic encyclopedia. Paper presented to the International Ergonomics Association, 10th Congress, Sydney, August 1st to 5th.
Kulik, J. A. & Bangert-Downs, R. L. (1983-1984). Effectiveness of technology in pre-college maths and science teaching. Journal of Educational Technology Systems, 12(2), 137-158.
Laurillard, D. (1987). Interactive Media: Working methods and practical applications. London: John Wiley.
Nation's future depends on reform of mathematics education. (1989, February 8th). Report on Education Research, pp. 3-4.
Office of Technology Assessment. (1988). Power on! New tools for teaching and learning. Washington, DC: US Government Printing Office.
Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.
Pea, R. (1987). Cognitive Technologies for mathematics education. In A. H. Schoenfeld, (Ed.). Cognitive Science and Mathematics Education. Hillsdale, NJ: Lawrence Erlbaum Associates. pp89-122.
Pea, R., Soloway, E. & Spohrer, J. C. (1987). The buggy path to the development of programming expertise. Focus on Learning Problems in Mathematics, 9(1), 5-30.
Percival, F. & Ellington, H. (1988). A Handbook of Educational Technology. 2nd. ed. London: Kogan Page.
Reiser, R. A. (1987). Instructional technology: A history. In R. M. Gagne (Ed.), Educational technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum. pp11-48.
Richey, R. (1986). The theoretical and conceptual bases of instructional design. New York: Kogan Page.
Roblyer, M. D., Castine, W. H. & King, F. J. (1988). Assessing the impact of computer-based instruction: A review of recent research. Computers in the Schools, 5(3/4), 11-149.
Romiszowski, A. J. (1981). Designing Instructional Systems. London: Kogan Page.
Salomon, G. (1979). Interaction of media, cognition, and learning. San Francisco: Jossey-Bass.
Schoenfeld, A. H. (Ed.) (1987). Cognitive science and mathematics education. Hillsdale, NJ: Lawrence Erlbaum.
Schwartz, J. & Yerushalmy, M. (1985). The geometric supposers. Pleasantville, NY: Sunburst Communications.
Scriven, M. & Adams, K. (1988). Evaluation: The educational potentialities of videodisc. In J. Steele (Ed.) Ask the Workers...: Evaluation. Sydney: Australian Caption Centre. pp 51-97.
Shneiderman, B. (1982). Fighting for the user. Bulletin of the American Society for Information Science, 9(2), 27-29.
Shneiderman, B. (1987). Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading, MA: Addison Wesley.
Steele, J. & Hedberg, J. G. (Eds.) (1988). Designing for Learning in Industry and Education. Belconnen, ACT: AJET Publications. http://cleo.murdoch.edu.au/aset/confs/edtech88/edtech88_contents.html
Steiglitz, E. L. & Costa, C. H. (1988). A statewide teacher training program's impact on computer usage in the schools. Computers in the Schools, 5(1/2), 91-98.
Trollip, S. R. & Alessi, S. M. (1988). Incorporating computers effectively in classrooms. Journal of Research on Computing in Education, 21(1), 70-81.
Author: John Hedberg was asked to prepare a paper on technology and learning Mathematics and Science for the recently completed Discipline Enquiry. This paper is a refocussing of the ideas to the general problems of selecting media for instructional tasks. He can be contacted at the Professional Development Centre, University of NSW, PO Box 1, Kensington NSW 2033.
Please cite as: Hedberg, J. G. (1989). Rethinking the selection of learning technologies. Australian Journal of Educational Technology, 5(2), 132-160. http://www.ascilite.org.au/ajet/ajet5/hedberg2.html
Why not in my language?
You can change this ;)
How does Dagri calculate?
First an example: 12.340 € - is that 12340,00 € or 12,34 €? Your answer may depend on your location, but the correct answer is: both readings are valid. And each reading can just as well be written as 12,340 €. Yet in no locale in the world do that 12,340 € and the 12.340 € we started with denote the same number.
Number formats, decimal signs and thousand signs differ all over the world. You can interpret them in one way, but you cannot convert them always back again. The well known solution shipped with other software is to force you to define what number format a column has to contain. And that's annoying to work with.
Dagri instead calculates two results: if a number isn't definite, one result assumes that a dot is the decimal sign, and the other assumes that a comma is the decimal sign. So if all your inputs can be interpreted in a unique way, both results will be the same, and Dagri offers you just one result. But if both results differ because numbers aren't definite, you'll get both results.
You can write "Price 12,34€" in a cell, and Dagri is able to detect the math value of 12.34 - and calculate with it. You can type in the next cell "12,345.6", and Dagri will detect the math value of 12345.6; So Dagri can calculate with mixed number formatings.
An invalidly formatted number will be ignored. For example, "12.34,56" cannot be interpreted as a number with either a dot or a comma as the decimal sign - instead it is a simple string that merely consists of the same characters as a number does.
And if you type in "my 1st number: 2", Dagri will detect the 1 and ignore the 2; only the first valid number in each cell is used to calculate.
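How such a double interpretation might work can be sketched in Python. This is only a rough illustration of the idea described above; the regular expression, the digit-grouping rules, and the function names are my own assumptions, not Dagri's actual code:

```python
import re

# A candidate number: digits with optional dot/comma separators,
# e.g. "12,345.6", "12.340" or "12,34".
NUMBER = re.compile(r'[0-9][0-9.,]*')

def interpret(token, decimal):
    """Read `token` assuming `decimal` ('.' or ',') is the decimal sign.
    The other sign is then the thousands separator. Returns a float,
    or None if the token is not valid under this convention."""
    thousands = ',' if decimal == '.' else '.'
    intpart, _, fracpart = token.partition(decimal)
    if decimal in fracpart:                      # two decimal signs
        return None
    groups = intpart.split(thousands)
    if thousands in fracpart or any(g == '' for g in groups):
        return None
    # Every digit group after the first must be exactly three digits.
    if len(groups) > 1 and (len(groups[0]) > 3 or
                            any(len(g) != 3 for g in groups[1:])):
        return None
    return float(intpart.replace(thousands, '') + '.' + (fracpart or '0'))

def cell_values(text):
    """Return the (dot-decimal, comma-decimal) readings of the first
    valid number in a cell, or (None, None) if there is none."""
    for token in NUMBER.findall(text):
        token = token.strip('.,')   # trailing punctuation is not part of it
        dot, comma = interpret(token, '.'), interpret(token, ',')
        if dot is not None or comma is not None:
            return dot, comma
    return None, None
```

For the opening example, `cell_values("12.340")` returns both readings, `(12.34, 12340.0)`, while an unambiguous cell such as `"Price 12,34€"` yields only the comma-decimal reading `12.34`, and `"12.34,56"` is rejected under both conventions.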
Where's the print button?
First the bad news: the toolkit Dagri uses has no print support, because implementing printing is no fun when you aim to be completely platform independent. And Dagri is.
The good news is that printing means "I want to see my data on another medium". Named correctly, you want to export data. And this is something we can do: export your data to HTML and print out the HTML file. (Maybe there will one day be a direct transmission of an HTML export to your web browser, which then opens a print preview - that would be equal to a print button.)
Can you add a SQL export?
The DAG file format is already a SQL database (SQLite). But if such an export would really help you, you can get your dump with this shell command:
sqlite3 file.dag .dump > dumpfile.sql
Of course you need the "sqlite3" binary for this. The dump file's table layout and content correspond to the DAG file format described on this page:
Is there a limitation of rows and/or columns?
Yes: your computer. The more cells you create, the more resources it will take. Grids with 1.000 cells aren't a problem. 5.000 might be usable, too. But more will need more patience or faster computers. Or other software from another author ...
Dagri is not designed for large table layouts: You have to add rows and columns manually, so thousands of them aren't fun anyway.
In other spreadsheets I can ...
Dagri is not a spreadsheet, and Dagri never will be a spreadsheet. The intention behind programming Dagri was not to create the next spreadsheet (why should I?). Instead Dagri solves the author's demands - which no spreadsheet can handle.
If you want or need a spreadsheet, use a spreadsheet - and if you expect a spreadsheet, you'll be disappointed (and won't even get a clue what this is all about).
What range of functions is planed?
Well, that's easy: None. There is no plan. And Dagri already fits my demands to 100 percent.
Unfortunately I have to earn money for my daily life, and there are many more fun projects wrestling for my spare time, too. So developing Dagri has no focus; if I have time left, an idea and coding in my mind, it will go on.
No binary for Apple Mac OS X?
I don't give a damn about that company.
Since a PC has a screen and keyboard (as does a terminal) but also has much more computing power, it's easy to use some of this computing power to make the PC computer behave like a text terminal. This is one type of terminal emulation. Another type of terminal emulation is where you set up a real terminal to emulate another brand/model of terminal. To do this you select the emulation you want (called "personality" in Wyse jargon) from the terminal's set-up menu. This section is about the first type of emulation: emulating a terminal on a PC.
In emulation, one of the serial ports of the computer will be used to connect the emulated terminal to another computer, either with a direct cable connection from serial port to serial port, or via a modem. Emulation provides more than just a terminal, since the PC doing the emulation can also do other tasks at the same time it's emulating a terminal. For example, kermit or zmodem may be run on the PC to enable transfer of files over the serial line (and possibly over the phone line via a modem) to the other computer that you are connected to. The emulation needs only to be run on one of the virtual consoles of the PC, leaving the other virtual consoles available for using the PC in command-line mode.
Much emulation software is available for use under the MS Windows OS; see Make a non-Linux PC a terminal. This can be used to connect a Windows PC to a Linux PC (as a text terminal). Most Linux free software can only emulate a VT100, VT102, or VT100/ANSI. If you find out about any others, let me know. Since most PCs have color monitors while the VT100 and VT102 were designed for a monochrome monitor, the emulation usually adds color capabilities (including a choice of colors). Sometimes the emulation is not 100% perfect but this usually causes few problems. For using a Mac computer to emulate a terminal see the mini-howto: Mac-Terminal.
Some have erroneously thought that they could create an emulator at a Linux console (monitor) by setting the environment variable TERM to the type of terminal they would like to emulate. This does not work. The value of TERM only tells an application program what terminal you are using. This way it doesn't need to interactively ask you this question. If you're at a Linux PC monitor (command line interface) it's a terminal of type "Linux" and you can't change this. So you must set TERM to "Linux".
If you set it to something else you are fibbing to application programs. As a result they will incorrectly interpret certain escape sequences from the console resulting in a corrupted interface. Since the Linux console behaves almost like a vt100 terminal, it could still work almost OK if you falsely claimed it was a vt100 (or some other terminal which is something like a vt100). It may seeming work OK most of the time but once in a while will make a mistake when editing or the like.
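The reason a false TERM corrupts the interface is that applications use the TERM value to look up, in the terminfo database, which escape sequences the terminal understands. A small Python illustration (this assumes the vt100 and xterm terminfo entries are installed, as they are on virtually every Linux system):

```python
import curses

def clear_sequence(term_type):
    """Look up, in the terminfo database, the clear-screen escape
    sequence recorded for the given terminal type."""
    curses.setupterm(term=term_type, fd=1)
    return curses.tigetstr('clear')

# Different terminal types are cleared by different byte sequences;
# an application told the wrong TERM would emit the wrong bytes.
vt100_clear = clear_sequence('vt100')
xterm_clear = clear_sequence('xterm')
print(vt100_clear, xterm_clear)  # both start with the ESC byte, 0x1b
```

An application that believes TERM is "vt100" sends the vt100 bytes; if the screen is actually something else that interprets those bytes differently, the display gets garbled in exactly the way described above.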
Dialing programs for making a PPP connection to the Internet don't normally include any terminal emulation. But some other modem dialing programs (such as minicom or seyon) do. Using them, one may (for example) dial up some public libraries to use their catalogs and indexes (or even read magazine articles). They are also useful for testing modems. Seyon is only for use with X Window and can emulate Tektronix 4014 terminals.
The communication program Kermit doesn't do terminal emulation as it is merely a semi-transparent pipe between whatever terminal you are on and the remote site you are connected to. Thus if you use kermit on a Linux PC the terminal type will be "Linux". If you have a Wyse60 connected to your PC and run kermit on that, you will appear as a Wyse60 to the remote computer (which may not be able to handle Wyse60 terminals). Minicom emulates a VT102 and if you use it on Wyse60 terminal VT102 escape sequences coming into your computer's serial port from a remote computer will get translated to the Wyse escape sequences before going out another serial port to your Wyse60 terminal. Kermit can't do this sort of thing.
Emulators exist under DOS, such as procomm, and work just as well. The terminal emulated is often the old VT100, VT102, or ANSI (like VT100).
Xterm (or uxterm, which is like xterm except that it supports unicode) may be run under X Window. They can emulate a VT102, VT220, or Tektronix 4014. There are also various xterm emulations (although there is no physical terminal named "xterm"). If you want pixmaps but don't need the Tektronix 4014 emulation (a vector graphics terminal), you may use eterm. Predecessors to eterm include xvt. One way to change font size in xterm is to right-click the mouse while holding down the Ctrl key.
For non-Latin alphabets, kterm is for Kanji terminal emulation (or for other non-Latin alphabets) while xcin is for Chinese. There is also 9term emulation. This seems to be more than just an emulator as it has a built-in editor and scroll-bars. It was designed for Plan 9, a Unix-like operating system from AT&T.
Unless you are using X Window with a large display, a real terminal is often nicer to use than emulating one. It usually has better resolution for text, and has no disk drives to make annoying noises.
For the VT series terminals there is a test program to help determine if a terminal behaves correctly like a vt52, vt100, vt102, vt220, vt320, vt420, etc. There is no documentation but it has menus and is easy to use. To compile it, run the configure script and then type "make". It may be downloaded from:
The console for a PC Linux system is normally the computer monitor in text mode. It emulates a terminal of type "Linux" and the escape sequences it uses are in the man page: console_codes. There is no way (unless you want to spend weeks rewriting the kernel code) to get it to emulate anything else. Setting the TERM environment variable to any type of terminal other than "Linux" will not result in emulating that other terminal. It will only result in a corrupted interface since you have falsely declared (via the TERM variable) that your "terminal" is of a type different from what it actually is. See Don't Use TERM For Emulation
In some cases, the console for a Linux PC is a text-terminal. One may recompile Linux to make a terminal receive most of the messages which normally go to the console. See Make a Serial Terminal the Console.
The "Linux" emulation of the monitor is flexible and has features which go well beyond those of the vt102 terminal which it was intended to emulate. These include the ability to use custom fonts and easily re-map the keyboard. These extra features reside in the console driver software (including the keyboard driver). The console driver only works for the monitor and will not work for a real terminal even if it's being used for the console. Thus the "console driver" is really a "monitor driver". In the early days of Linux one couldn't use a real terminal as the console so "monitor" and "console" were once always the same thing.
The stty commands work for the monitor-console just as if it were a real terminal. They are handled by the same terminal driver that is used for real terminals. Bytes headed for the screen first go through the terminal (tty) driver and then through the console driver. For the monitor, some of the stty commands don't do anything (such as setting the baud rate). You may set the monitor baud rate to any allowed value (such as a slow 300 speed) but the actual speed of putting text on the monitor screen will not change. The file /etc/ioctl.save stores stty settings for use only when the console is in single-user mode (but you are normally in multi-user mode). This is explained (a little) in the init man page.
Many commands exist to utilize the added features provided by the console-monitor driver. Real terminals, which use neither scan codes nor VGA cards, unfortunately can't use these features. To find out more about the console see the Keyboard-and-Console-HOWTO. Also see the various man pages about the console (type "man -k console"). Unfortunately, much of this documentation is outdated.
Emulators often don't work quite right, so before purchasing software you should try to thoroughly check out what you will get.
Unless you want to emulate the standard vt100 (or close to it) or a Wyse 60, there doesn't seem to be much free terminal emulation software available for Linux. The free programs minicom and seyon (only for X Window) can emulate a vt100 (or close to it). Seyon can also emulate a Tektronix 4014 terminal. See Wyse 60 emulator
Minicom may be used to emulate a directly connected terminal by simply starting minicom (after configuring it for the serial port used). Of course, you don't dial out and when you want to quit (after you logout from the other PC) you use minicom's q command to quit without reset since there is no modem to reset. When minicom starts, it automatically sends out a modem init string to the serial port. But since there's no modem there, the string gets put after the "login:" prompt. If this string is mostly capital letters, the getty program (which runs login) at the other PC may think that your terminal has only capital letters and try to use only capital letters. To avoid this, configure the modem init strings sent by minicom to null (erase the init strings).
The terminal emulator "Procomm" (which is from Dos), can be used on a Linux PC if you run dosemu to emulate Dos. For details see: http://solarflow.dyndns.org/pcplus/.
There's a specialized Linux distribution: Serial Terminal Linux. It will turn a PC to into a minicom-like terminal. It's small (fits on a floppy) and will not let you use the PC for any other purpose (when it's running). See http://www.eskimo.com/~johnnyb/computers/stl/. It will let you have more than one session running (similar to virtual terminals), one for each serial port you have.
TERM (non-free commercial software from Century Software) Terminal Emulator can emulate Wyse60, 50; VT 220, 102, 100, 52; TV950, 925, 912; PCTERM; ANSI; IBM3101; ADM-11; WANG 2110. Block mode is available for IBM and Wyse. It runs on a Linux PC.
Emulators exist which run on non-Linux PCs. They permit you to use a non-Linux PC as a terminal connected to a Linux PC. Under DOS there is procomm. Windows comes with "HyperTerminal" (formerly simply called "Terminal" in Windows 3.x and DOS). Competing with this is "HyperTerminal Private Edition" http://www.hilgraeve.com/htpe/index.html which is non-free for business use. It can emulate a vt-220. The Windows "terminals" are intended for calling out with a modem but they should also work as directly connected terminals?? Turbosoft's TTWin can emulate over 80 different terminals under Windows. See http://www.turbosoft.com.au/ (Australia).
For the Mac Computer there is emulation by Carnation Software http://www.carnationsoftware.com/carnation/HT.Carn.Home.html
One place to check terminal emulation products is Shuford's site, but it seems to list old products (which may still work OK). The fact that most only run under DOS (and not Windows) indicates that this info is dated. See http://www.cs.utk.edu/~shuford/terminal/term_emulator_products.txt.
In computing, a spell checker (or spell check) is an application program that flags words in a document that may not be spelled correctly. Spell checkers may be stand-alone, capable of operating on a block of text, or as part of a larger application, such as a word processor, email client, electronic dictionary, or search engine.
Eye have a spelling chequer,
It came with my Pea Sea.
It plane lee marks four my revue
Miss Steaks I can knot sea.
Eye strike the quays and type a whirred
And weight four it two say
Weather eye am write oar wrong
It tells me straight a weigh.
Eye ran this poem threw it,
Your shore real glad two no.
Its vary polished in its weigh.
My chequer tolled me sew.
A chequer is a bless thing,
It freeze yew lodes of thyme.
It helps me right all stiles of righting,
And aides me when eye rime.
Each frays come posed up on my screen
Eye trussed too bee a joule.
The chequer pours o'er every word
Two cheque sum spelling rule.
A basic spell checker carries out the following processes:
- It scans the text and extracts the words contained in it
- It then compares each word with a known list of correctly spelled words (i.e. a dictionary). This might contain just a list of words, or it might also contain additional information, such as hyphenation points or lexical and grammatical attributes.
- An additional step is a language-dependent algorithm for handling morphology. Even for a lightly inflected language like English, the spell-checker will need to consider different forms of the same word, such as plurals, verbal forms, contractions, and possessives. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated.
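As a toy illustration of these three steps, here is a minimal checker in Python; the tiny word list and the crude plural/possessive stripping merely stand in for a real dictionary and a real morphological analyser:

```python
import re

DICTIONARY = {"the", "cat", "cats", "sat", "on", "mat", "checker",
              "spelling", "check", "word", "words"}

def tokenize(text):
    """Step 1: scan the text and extract the words contained in it."""
    return re.findall(r"[A-Za-z']+", text)

def base_forms(word):
    """Step 3 (simplified): generate candidate base forms so that
    inflected words such as plurals and possessives can match the
    dictionary entry for their stem."""
    forms = [word]
    if word.endswith("'s"):
        forms.append(word[:-2])
    if word.endswith("s") and len(word) > 1:
        forms.append(word[:-1])
    return forms

def misspelled(text):
    """Step 2: flag every word none of whose forms is in the dictionary."""
    return [w for w in tokenize(text)
            if not any(f.lower() in DICTIONARY for f in base_forms(w.lower()))]

print(misspelled("The cat's mats sat on teh mat"))  # → ['teh']
```

Note that "mats" and "cat's" pass the check even though only "mat" and "cat" are listed, which is exactly the job of the morphology step.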
It is unclear whether morphological analysis provides a significant benefit for English, though its benefits for highly synthetic languages such as German, Hungarian or Turkish are clear.
As an adjunct to these components, the program's user interface will allow users to approve or reject replacements and modify the program's operation.
An alternative type of spell checker uses solely statistical information, such as n-grams. This approach usually requires a lot of effort to obtain sufficient statistical information and may require a lot more runtime storage. This method is not currently in general use.
In some cases spell checkers use a fixed list of misspellings and suggestions for those misspellings; this less flexible approach is often used in paper-based correction methods, such as the see also entries of encyclopedias.
Research extends back to 1957, including spelling checkers for bitmap images of cursive writing and special applications to find records in databases in spite of incorrect entries. In 1961, Les Earnest, who headed the research on this budding technology, saw it necessary to include the first spell checker that accessed a list of 10,000 acceptable words. Ralph Gorin, a graduate student under Earnest at the time, created the first true spelling checker program written as an applications program (rather than research) for general English text: Spell for the DEC PDP-10 at Stanford University's Artificial Intelligence Laboratory, in February 1971. Gorin wrote SPELL in assembly language, for faster action; he made the first spelling corrector by searching the word list for plausible correct spellings that differ by a single letter or adjacent letter transpositions and presenting them to the user. Gorin made SPELL publicly accessible, as was done with most SAIL (Stanford Artificial Intelligence Laboratory) programs, and it soon spread around the world via the new ARPAnet, about ten years before personal computers came into general use. Spell, its algorithms and data structures inspired the Unix ispell program.
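The candidate search described above — scanning the word list for correct spellings that differ by a single letter or an adjacent-letter transposition — can be sketched as follows. This is a toy reconstruction of the idea, not Gorin's SPELL code, and the sample word list is invented:

```python
import string

def candidates(word, dictionary):
    """Generate dictionary words reachable from `word` by a single-letter
    substitution or an adjacent-letter transposition — the two error
    classes the early SPELL corrector is described as searching for."""
    found = set()
    # Single-letter substitutions.
    for i in range(len(word)):
        for c in string.ascii_lowercase:
            cand = word[:i] + c + word[i + 1:]
            if cand != word and cand in dictionary:
                found.add(cand)
    # Adjacent-letter transpositions.
    for i in range(len(word) - 1):
        cand = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        if cand != word and cand in dictionary:
            found.add(cand)
    return sorted(found)

words = {"spell", "spill", "swell", "smell"}
print(candidates("sepll", words))  # transposition recovers 'spell'
print(candidates("srell", words))  # substitutions give several plausible fixes
```

Presenting all members of `candidates(...)` to the user, as SPELL did, turns a bare verifier into a corrector.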
The first spell checkers were widely available on mainframe computers in the late 1970s. A group of six linguists from Georgetown University developed the first spell-check system for the IBM corporation.
The company Software Concepts, Inc., founded by William J. Tobin in 1978, developed one of the first patented computer software programs in the United States for spelling verification. The program was used by most major word-processing and photo-typesetting systems, including Lanier, Philips, and Xerox, among many others. The patent the company was issued in 1980 for the Spell-Checking program was one of the first software patents issued in the United States, Canada, and Europe.
The first spell checkers for personal computers appeared for CP/M and TRS-80 computers in 1980, followed by packages for the IBM PC after it was introduced in 1981. Developers such as Maria Mariani, Soft-Art, Microlytics, Proximity, Circle Noetics, and Reference Software rushed OEM packages or end-user products into the rapidly expanding software market, primarily for the PC but also for Apple Macintosh, VAX, and Unix. On the PCs, these spell checkers were standalone programs, many of which could be run in TSR mode from within word-processing packages on PCs with sufficient memory.
However, the market for standalone packages was short-lived, as by the mid-1980s developers of popular word-processing packages like WordStar and WordPerfect had incorporated spell checkers in their packages, mostly licensed from the above companies, who quickly expanded support from just English to European and eventually even Asian languages. This, however, required increasing sophistication in the morphology routines of the software, particularly with regard to heavily agglutinative languages like Hungarian and Finnish. Although the size of the word-processing market in a country like Iceland might not have justified the investment of implementing a spell checker, companies like WordPerfect nonetheless strove to localize their software for as many national markets as possible as part of their global marketing strategy.
Recently, spell checking has moved beyond word processors as Firefox 2.0, a web browser, has spell check support for user-written content, such as when editing Wikitext, writing on many webmail sites, blogs, and social networking websites. The web browsers Google Chrome, Konqueror, and Opera, the email client Kmail and the instant messaging client Pidgin also offer spell checking support, transparently using GNU Aspell as their engine. Mac OS X now has spell check systemwide, extending the service to virtually all bundled and third party applications.
The first spell checkers were "verifiers" instead of "correctors." They offered no suggestions for incorrectly spelled words. This was helpful for typos but it was not so helpful for logical or phonetic errors. The challenge the developers faced was the difficulty in offering useful suggestions for misspelled words. This requires reducing words to a skeletal form and applying pattern-matching algorithms.
It might seem logical that where spell-checking dictionaries are concerned, "the bigger, the better," so that correct words are not marked as incorrect. In practice, however, an optimal size for English appears to be around 90,000 entries. If there are more than this, incorrectly spelled words may be skipped because they are mistaken for others. For example, a linguist might determine on the basis of corpus linguistics that the word baht is more frequently a misspelling of bath or bat than a reference to the Thai currency. Hence, it would typically be more useful if a few people who write about Thai currency were slightly inconvenienced, than if the spelling errors of the many more people who discuss baths were overlooked.
The first MS-DOS spell checkers were mostly used in proofing mode from within word processing packages. After preparing a document, a user scanned the text looking for misspellings. Later, however, batch processing was offered in such packages as Oracle's short-lived CoAuthor. This allowed a user to view the results after a document was processed and only correct the words that he or she knew to be wrong. When memory and processing power became abundant, spell checking was performed in the background in an interactive way, as has been the case with Sector Software's Spellbound program, released in 1987, and Microsoft Word since Word 95.
In recent years, spell checkers have become increasingly sophisticated; some are now capable of recognizing simple grammatical errors. However, even at their best, they rarely catch all the errors in a text (such as homophone errors) and will flag neologisms and foreign words as misspellings. Nonetheless, spell checkers can be considered as a type of foreign language writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language.
Spell-checking non-English languages
English is unusual in that most words used in formal writing have a single spelling that can be found in a typical dictionary, with the exception of some jargon and modified words. In many languages, however, it is typical to frequently combine words in new ways. In German, compound nouns are frequently coined from other existing nouns. Some scripts do not clearly separate one word from another, requiring word-splitting algorithms. Each of these presents unique challenges to non-English language spell checkers.
Context-sensitive spell checkers
Recently, research has focused on developing algorithms which are capable of recognizing a misspelled word, even if the word itself is in the vocabulary, based on the context of the surrounding words. Not only does this allow words such as those in the poem above to be caught, but it mitigates the detrimental effect of enlarging dictionaries, allowing more words to be recognized. For example, baht in the same paragraph as Thai or Thailand would not be recognized as a misspelling of bath. The most common example of errors caught by such a system are homophone errors, such as the bold words in the following sentence:
- Their coming too sea if its reel.
The most successful algorithm to date is Andrew Golding and Dan Roth's "Winnow-based spelling correction algorithm", published in 1999, which is able to recognize about 96% of context-sensitive spelling errors, in addition to ordinary non-word spelling errors. A context-sensitive spell checker appears in Microsoft Office 2007, Google Wave, Ginger Software and in Ghotit Dyslexia Software context spell checker tuned for people with dyslexia.
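A toy illustration of the context-sensitive idea (this is not the Winnow algorithm itself): keep confusion sets of easily swapped words and score each member by how well hand-picked context hints overlap the surrounding sentence. The confusion set and hint lists below are made-up examples, not data from the published work:

```python
# One confusion set of easily swapped words, plus context words that
# hint at each member. Both lists are illustrative assumptions.
CONFUSION = {"bath", "baht"}
CONTEXT_HINTS = {
    "baht": {"thai", "thailand", "currency"},
    "bath": {"water", "soap", "towel"},
}

def best_member(word, sentence):
    """Pick the confusion-set member whose context hints overlap the
    sentence most; fall back to the original word when nothing matches."""
    if word not in CONFUSION:
        return word
    tokens = set(sentence.lower().split())
    scores = {m: len(CONTEXT_HINTS[m] & tokens) for m in CONFUSION}
    # Break ties in favor of the word the user actually typed.
    best = max(scores, key=lambda m: (scores[m], m == word))
    return best if scores[best] > 0 else word

print(best_member("baht", "he took a baht with soap and water"))    # → 'bath'
print(best_member("baht", "the thai baht fell against the dollar"))  # → 'baht'
```

Real context-sensitive checkers learn these associations from large corpora rather than hand-listing them, but the mechanism — in-vocabulary words flagged only when the context argues against them — is the same.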
Some critics of technology and computers have attempted to link spell checkers to a trend of skill losses in writing, reading, and speaking. They claim that the convenience of computers has led people to become lazy, often not proofreading written work past a simple pass by a spell checker. Supporters claim that these changes may actually be beneficial to society, by making writing and learning new languages more accessible to the general public. They claim that the skills lost by the invention of automated spell checkers are being replaced by better skills, such as faster and more efficient research skills. Other supporters of technology point to the fact that these skills are not being lost to people who require and make use of them regularly, such as authors, critics, and language professionals.
An example of the problem of completely relying on spell checkers is shown in the Spell-checker Poem above. It was originally composed by Dr. Jerrold H. Zar in 1991, assisted by Mark Eckman with an original length of 225 words, and containing 123 incorrectly used words. According to most spell checkers, the poem is valid, although most people would be able to tell at a simple glance that most words are used incorrectly. As a result, spell checkers are sometimes derided as spilling chuckers or similar, slightly misspelled names.
Not all of the critics are opponents of technological progress, however. An article based on research by Galletta et al. reports that in the Galletta study, higher verbal skills are needed for highest performance when using a spell checker. The theory suggested that only writers with higher verbal skills could recognize and ignore false positives or incorrect suggestions. However, it was found that those with the higher skills lost their unaided performance advantage in multiple categories of errors, performing as poorly as the low verbals with the spell-checkers turned on. The conclusion points to some evidence of a loss of skill.
See also
- Cupertino effect
- Grammar checker
- Record linkage problem
- Spelling suggestion
- Approximate string matching
- Words (Unix)
- Earnest, Les. "The First Three Spelling Checkers". Stanford University. Retrieved 10 October 2011.
- Peterson, James (Dec 1980). Computer Programs for Detecting and Correcting Spelling Errors. Retrieved 2011-02-18.
- Earnest, Les. Visible Legacies for Y3K. Retrieved 2011-02-18.
- "Georgetown U Faculty & Staff: The Center for Language, Education & Development". Retrieved 2008-12-18., citation: "Maria Mariani... was one of a group of six linguists from Georgetown University who developed the first spell-check system for the IBM corporation."
- "Mr. Tobin has been awarded 15 patents in the past 40 years". WilliamJTobin.com. Retrieved 2011-05-18.
- Banks, T. (2008). Foreign Language Learning Difficulties and Teaching Strategies. (pp. 29). Master's Thesis, Dominican University of California. Retrieved 19 March 2012.
- Journal Article. SpringerLink. Retrieved 22 September 2010.
- Walt Mossberg (4 January 2007). "Review". Wall Street Journal. Retrieved 24 September 2010.
- "Google Operating System". googlesystem.blogspot.com. Retrieved 25 September 2010. "Google's Context-Sensitive Spell Checker". May 29, 2009.
- "Ginger Software - The World's Leading Grammar and Spell Checker". Gingersoftware.com.com. Retrieved 19 June 2011.
- "Ghotit Dyslexia Software for People with Learning Disabilities". Ghotit.com. Retrieved 25 September 2010.
- Baase, Sara. A Gift of Fire: Social, Legal, and Ethical Issues for Computing and the Internet. 3. Upper Saddle River: Prentice Hall, 2007. Pages 357-358. ISBN 0-13-600848-8.
- Jerrold H. Zar. "Candidate for a Pullet Surprise". Northern Illinois University. Retrieved 24 September 2010.
- "Retired faculty page". NIU.edu. Retrieved 6 May 2010.
- Richard Nordquist. "The Spell Checker Poem, by Mark Eckman and Jerrold H. Zar". About.com. Retrieved 24 September 2010.
- Education.com Is Spell Check Creating a Generation of Dummies?
- Norvig.com, "How to Write a Spelling Corrector", by Peter Norvig
- BBK.ac.uk, "Spellchecking by computer", by Roger Mitton
- CBSNews.com, Spell-Check Crutch Curtails Correctness, by Lloyd de Vries
- NIU.edu, Candidate for a Pullet Surprise - Complete corrected poem
- Microsoft Word Spelling and Grammar Check Demonstration
- Stilus, Daedalus' spell, grammar and style checker
- Senate officials in both parties say the starting point for any power-sharing agreement would be the organizing resolution struck in 2001, the only other time the chamber has been evenly split. Under it, Republicans held the chairmanships, but committee budgets were split evenly. — "Optimum Online - News - AP News - Evenly split Senate could",
- Learn about Evenly on . Find info and videos including: How to Cut Hair Evenly, How to Trim Hedges Evenly, How to Pluck Eyebrows Evenly and much more. — “Evenly - ”,
- Shop for Evenly. Price comparison, consumer reviews, and store ratings on . — “Evenly - - Product Reviews, Compare Prices, and Shop at”,
- All-Clad Cop-R-Chef The All Clad Cop-R-Chef Collection features an extra-thick copper exterior. The All Clad cooking surface is hand-polished 1810 stainless steel that w. — “All-Clad Cop-R-Chef - All Clad Cookware + Bonus offer”,
- Definition of evenly in the Online Dictionary. Meaning of evenly. Pronunciation of evenly. Translations of evenly. evenly synonyms, evenly antonyms. Information about evenly in the free online English dictionary and encyclopedia. — “evenly - definition of evenly by the Free Online Dictionary”,
- Ron Paul's support is more evenly distributed than other candidates. July 18, 2007 by disinter. Analyzed website traffic to each of the GOP candidate's websites. The reason that Ron Paul's constituency is more evenly distributed than the corporate and special interest bagmen is that Ron's. — "Ron Paul's support is more evenly distributed than other",
- AP Breaking News & Headlines. Find the latest national, world and local Oregon news from the Associated Press. Get top news stories on Weather, Politics, Sports, Business, Entertainment, Health & more from AP News Wire. Browse AP Photos & Videos Evenly split Senate could look to power sharing. — “Evenly split Senate could look to power sharing | ”,
- Definition of evenly from Webster's New World College Dictionary. Meaning of evenly. Pronunciation of evenly. Definition of the word evenly. Origin of the word evenly. — “evenly - Definition of evenly at ”,
- Evenly. Learn about Evenly on . Get information and videos on Evenly including articles on factorial, gatherered, antenna and more!. — “Evenly | Answerbag”,
- Take the phrase "divide evenly." In math it means when you divide two numbers, there is The Greatest Common Factor is the largest integer that will divide evenly into any two or more integers. — “Excel Math: Divide evenly”,
- Brown: Evenly divided voters mean tricky governing. By Fred Brown In a state as evenly divided politically as Colorado — and it is almost a mathematical improbability how evenly divided it is — a statewide. — “Brown: Evenly divided voters mean tricky governing - The”,
- Add dry cake mix evenly over degrees for 1 hour. Ingredients: 5 evenly. Spread the cherry pie filling over the pineapple. Spread DRY cake mix evenly over Serve with. — “ - Recipes - Cherry Dump Cake”,
- A Wall Street Journal/NBC News poll in January showed that voters for the first time since 2003 don't favor a Democratic majority in Congress. Voters in the poll were evenly split over which party they thought should run Congress. — “Poll Shows Democrats Losing Their Edge - ”,
- Definition of word from the Merriam-Webster Online Dictionary with audio pronunciations, thesaurus, Word of the Day, and word games. — “Evenly - Definition and More from the Free Merriam-Webster”, merriam-
- More Americans call Congress' passage of a healthcare reform bill "a good thing" (49%) than call it "a bad thing" (40%). Reaction is predictably partisan, with independents evenly divided. — “By Slim Margin, Americans Support Healthcare Bill's Passage”,
- Those Americans who care about the major league baseball strike are evenly divided on whether the owners or the players are right, but one American in three doesn't care or has no opinion, the latest New York Times/CBS News Poll shows. The box. — “Poll Shows Americans Are Evenly Divided on Strike - ”,
- evenly: in a level and regular way. — "evenly: Information from ",
- evenly (comparative more evenly, superlative most evenly) So as to make flat. Spread the To avoid arguments, he divided the sweets evenly between his two children. (mathematics) In a manner that. — “evenly - Wiktionary”,
- Iowans are almost evenly divided about whether they would vote for or against a constitutional amendment to end marriage for same-sex couples, according to The Des Moines Register's new Iowa Poll. — "Iowa Poll: Iowans evenly divided on gay marriage ban",
- Spoon the mashed potatoes into the bottoms of each prepared ramekin, spreading them evenly with a rubber spatula. Pour the cornbread batter evenly over the meat layer in each ramekin. — “Christmas Shepherd's Pie - Paula Deen's Recipes, Home Cooking”,
- Hair Coloring Brush - Evenly Distributes Hair Color. The Salon Perfect Hair Coloring Brush gives you salon results right in your own home. The Hair Coloring Brush evenly distributes your favorite hair color fast, safe, and easy. With just 3. — “Hair Coloring Brush - Hair Color Comb Evenly Distributes Hair”,
- English Translation for evenly - dict.cc German-English Dictionary. — “dict.cc | evenly | English Dictionary”, dict.cc
- It's a question of spreading the available energy, aerobic and anaerobic, evenly over four minutes. Admittedly, scientific authority is not distributed evenly throughout the body of scientists; some distinguished members of the profession. — “Definition of Evenly”,
- We found 24 dictionaries with English definitions that include the word evenly: Click on the first link on a line below to go directly to a page where "evenly" is defined. General (21 matching dictionaries) evenly: Macmillan Dictionary [home, info]. — “Definitions of evenly - OneLook Dictionary Search”,
- This is a page about evenly. This page includes the Etymology and sound of this word, as well as some additional information. This page also contains information about: learn english, pronunciation of, audio, .WAV, speak english, evenly.wav,. — “evenly - definition of evenly - ”,
related videos for evenly
- Betty's Triple Chocolate Frozen Ice Cream Pie Recipe In this video, Betty demonstrates how to make a Triple Chocolate Frozen Ice Cream Pie for her husband , Rick, as a Father's Day treat. This dessert is sinfully rich! I don't recommend making it on a daily basis! Ingredients: 11 3/4 oz. jar hot fudge topping 10-inch ready-to-fill chocolate pie crust 1 quart chocolate ice cream, softened frozen whipped topping, thawed chocolate curls (from milk, dark, or unsweetened chocolate) toasted almond slices (or slivers) Spread 1/3 cup hot fudge topping(at room temperature) evenly over the bottom of the chocolate pie crust. Spoon the softened chocolate ice cream over the fudge topping, and spread evenly. The filling should reach the top of the pie crust. Freeze the pie until firm (about 4 hours). Remove the pie from the freezer, and spread another 1/3 cup of hot fudge topping over the top. (You may need to microwave this a bit to make it spreadable, but don't get it too hot, or it will melt the cake.) Gently spoon whipped topping over the fudge topping, and spread it in an attractive manner. Sprinkle with chocolate curls and toasted almonds. Place the pie back in the freezer for 8 hours or overnight. When ready to serve, cut out wedges and place them on nice serving plates. Decadent, but scrumptious!!!
- How to Draw Kobe Bryant: Step by Step (YOUDRAW by Merrill Kazanjian) Link- Today, YOU are going to draw Kobe Bryant. It doesn't matter if you have prior art training or not. This video will break the process down in to simple steps so that anyone can do it. Grab a pencil and paper and let me show you what I mean. Remember to pause the video when you need to. Here we go. Step 1: Draw an oval shape for Kobe's head. Notice that the bottom of the oval looks like an upside down trapezoid while the top of the oval shape is rounded. Step 2: Make four horizontal lines. The top line MUST intersect the midway point of the oval shape. The lowest line should be placed at the bottom of the chin. After drawing the top and bottom lines, add two EVENLY SPACED lines between the top and bottom line. At the end of step two, you should have THREE evenly spaced segments between the top and bottom lines (refer to the picture). Step 3: Observe the six dots that I added. The top four map out the corners of Kobe's eyes, while the bottom two map out the corners of his mouth. Lets look at their relationship. Notice that these dots are leaning towards the right side of the oval. This is due to the fact that Kobe's head is slightly turned. Also, take a second to notice the distance in between the two eyes is equal to one eye length. Finally, notice that the middle part of the space designated for each eye, will line up with the outer corners of the mouth. Now, draw in the six dots. Step 4: Draw in the eyes....The upper eyelid as a rainbow shape and the ...
- Recipes Using Cake Mixes: #12 Chocolate Peanut Butter Fudge Bars Music by: Jason Shaw I was looking for a good recipe for a colleague at work who love peanut butter and chocolate. I found this one on the Woman's Day website. It was love at first sight. INGREDIENTS 1 box Chocolate Brownies mix 1⁄2 cup peanut butter chips 1⁄2 cup dry roasted peanuts (original recipe says chopped, I left them whole) Peanut Butter Filling 3⁄4 cup creamy peanut butter 1 cup marshmallow cream (such as Marshmallow Fluff or Creme) 3⁄4 stick (6 Tbsp) butter, softened 3⁄4 cup confectioners' sugar (also called icing sugar or powdered sugar) Chocolate Glaze 6 oz bittersweet baking chocolate, coarsely chopped 5 Tbsp butter 1 Tbsp corn syrup PREPARATION 1. Heat oven to 350°F. Line a 9-in. square pan with foil, letting foil extend above pan on opposite sides. Coat foil with nonstick spray. 2. Prepare brownie mix as package directs for fudgy brownies. Stir in peanut butter chips and peanuts. Spread evenly in prepared pan. 3. Bake 30 minutes, or until a wooden pick inserted in center comes out with moist crumbs attached. Cool completely in pan on a wire rack. 4. Filling: Beat peanut butter, marshmallow cream and butter in a medium bowl with mixer on high speed until well blended. Reduce speed to low, add confectioners' sugar and beat until blended. Spread evenly over brownie. 5. Glaze: Microwave chocolate and butter, stirring at 30-second intervals, until melted and smooth. Cool slightly; stir in corn syrup. Spread evenly over Filling. Refrigerate 1 ...
- Evenly 0dd T00 The EPIC sequal to Evenly 0dd
- Betty's Grilled Pepperoni-Pesto Sandwich Recipe In this video, Betty demonstrates how to make Grilled Pepperoni-Pesto Sandwiches. These are made with sliced Italian bread, filled with mozzarella cheese, pesto sauce, pepperoni, and pizza sauce, and then grilled to perfection on a stove top. Ingredients (for 4 sandwiches): about 8 oz. sliced mozzarella cheese, or as desired (8) 1-inch thick slices Italian bread ¼ cup pizza sauce ¼ cup pesto sauce 20 to 24 slices pepperoni 2 tablespoons butter, softened For one sandwich: Place 1 cheese slice on top of a slice of Italian bread. Spread evenly with pizza sauce. Top with another cheese slice, and spread evenly with pesto sauce. Top with another slice of cheese, and arrange 5 or 6 pepperoni slices over the top. Top with another cheese slice and finish the sandwich with a second slice of Italian bread. Spread a small amount of butter on top of the sandwich. Invert the sandwich onto a hot nonstick skillet or griddle, and cook over medium heat until browned. Spread a small amount of butter on the ungrilled side of the sandwich. Turn, and cook until brown. Serve immediately. This is one way you can use the pesto sauce we made recently. Tomorrow I will be showing you how to use pesto sauce with pasta—stay tuned! Love, Betty ♥
- Betty's Chocolate-y Peanut Butter Bars Recipe In this video, Betty demonstrates how to make her ever-popular Chocolate-y Peanut Butter Bars. You can serve them for dessert, or pack them with your lunch for a burst of energy later in the day! Ingredients: 1 cup chunky peanut butter (You may use smooth peanut butter, if you prefer.) 1 stick butter or margarine, melted and cooled to room temperature 2 eggs 18.25-oz. package butter cake mix (You may use yellow cake mix.) 6 oz. semisweet chocolate chips 14 oz. can sweetened condensed milk cooking oil spray (for oiling baking dish) In a large mixing bowl, combine 1 cup peanut butter, 1 stick melted butter or margarine, 2 eggs, and an 18.25-oz. package of butter cake mix. Beat at medium speed of an electric mixer for 2 minutes. Spray a 13-inch by 9-inch by 2-inch baking dish with cooking oil spray. Press half of the cake mix mixture evenly into the oiled dish. You may need to use your hands to do this. (Spraying hands with oil will keep the dough from sticking to your hands.) Bake at 350 degrees for 10 minutes. Sprinkle 6 oz. semisweet chocolate chips evenly over the top, and drizzle 14 oz. of sweetened condensed milk evenly over that. Place the remaining half of the cake mix mixture on top of the sweetened condensed milk layer. You will need to spoon out teaspoonfuls and place them evenly over the top of the other layers. Cover as much of the top as possible, but be aware that the topping will bake and connect together to provide a complete layer. Bake at 350 degrees for ...
- Chris Montez - Lets Dance (Stereo DRUMS Remix !!) Classic Rocker Remixed with Center Drums & evenly Widened for Power
- MIT Physics Demo -- Strobe of a Falling Ball A ball is dropped in front of a meter stick and lit by a strobe light. A long exposure photograph captures the position of the ball at each evenly spaced flash of light. The acceleration of the ball can then be measured from the photo. Note that the frame rate of the video capture (30fps) is quite close to the strobe rate (15Hz). This is why the strobe flashes in the slow motion video don't appear to be exactly evenly timed. See the final image on Flickr - See the original video on MIT TechTV - techtv.mit.edu
- Cantata 51, Elizabeth Parcells JS Bach, complete hi res version at interview comment from 2005 Elizabeth: I performed Cantata 51 before it was recorded but never where the Chorale and Alleluia were the same tempo and that worked perfectly. Those two tempos are identical. There's no tempo change indicated. The conductor did it that way and it worked. Boy was I glad I had practiced singing scales, quads, and triplets with a metronome in good time. That is what the music demands. The conductor and I were able to deliver that because I'd practiced the techniques and could sing my melismas evenly. When all the notes are created differently the melismas get choppy. The singer has to smooth out the legato and sing the notes evenly in rhythm. That has to be trained. When you get to your professional thing and that's what the conductor wants and what the music demands you want to know that you are technically ready to deliver. If there is a lack of precision it means the singer has neglected their exercises.
- Pest Control Tips : Will Salt Kill Fleas? Salt will kill fleas if it is evenly applied throughout a carpet, left for several days, and then vacuumed up after about a week. Rake highly refined salt into a home carpet to kill fleas with instructions from a certified exterminator and arborist in this free video on pest control.
- Kimmaytube ♡♡♡ Flat Ironed Hair Part 1 Video sponsored by READ, READ READ! Hi Everyone, sorry if you've been waiting, but I've said a million times before, when work calls, I have to answer!! FAQ's Q) Did you TRIM your hair? Will you trim it while it's straight? A) No. I did not trim my hair. Surprisingly, my ends were fairly even considering that I do not trim them evenly, ever. I only trim off knots and weathered ends. I am not going to trim my hair while it's straight. I wear it *** 99% of the time. Why would I change up a strategy that has gotten me here thus far? There are a lot of opinions and misinformation about trimming, but I don't debate it anymore. :-/ If you prefer to wear your hair straight on a regular basis, evenly trimming your hair may be best for you. Q) Why didn't you make it bone straight? A) Because I didn't want to. Q) Is your hair really waist length? A) Technically, my longest portion of hair (especially my hair in the back on my right) reaches the narrowest point of my waistline. The narrowest part of my waistline is 24". This is where my hair falls. So as far as I'm concerned, I've reached my goal. :o) HOWEVER, I am going to continue my growth journey for the rest of the year! Waist length for me is a span that continues for another 6" down the length of my body. Q) What is your NEW HAIR GOAL? A) So now, I want the bulk of my hair to be at waist length by August 2011. That will be another 3 - 4" of hair. At my growth rate (6" a year) I should reach my hipbone by ...
- 9800GTX SLI vs 9800GX2 Crysis showdown These cards are evenly priced, how well does each one handle Crysis?
- How to make rocky road cookie pizza - An easy dessert recipe Chocolate chip cookie dough, nuts, marshmallows-they're all our favorite food groups. And if you put them together and bake them in a pizza pan, you just may have the perfect dessert. Pillsbury's Lauren Chattman whips up a rocky road cookie pizza that's fun and delicious, any way you slice it. Recipe: How to make a rocky road cookie pizza 1. Heat oven to 350°F. 2. Grease 12-inch pizza pan with shortening or cooking spray. In pan, break up cookie dough. With floured fingers, press dough evenly in bottom of pan to form crust. 3. Bake 12 to 17 minutes or until light golden brown 4. Sprinkle marshmallows, peanuts and chocolate chips evenly over crust. Drizzle with caramel topping. 5. Bake 8 to 10 minutes longer or until topping is melted. Cool completely, about 1 hour 15 minutes. Cut into wedges. Pillsbury's rocky road cookie recipe makes 16 servings. -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- More Easy Howdini Recipe Videos -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- How to make stuffed crust pizza How to make stromboli How to make easy chicken pot pie
- Zetsubo Sensei Cake cutting scene Cake shake! Season 01, Episode 02. Subs by afk
- Betty's Scalloped Potatoes with Cheese Recipe In this video, Betty demonstrates how to make a large casserole of Scalloped Potatoes with Cheese. This is a perfect accompaniment to her dinner of Herbed Chicken Crunch, Sauteed Zucchini and Yellow Squash, fresh green beans from the Farmer's Market, and a large croissant! Ingredients: 6 medium potatoes, peeled and sliced 1/4" thick (I used Russet, but any variety is fine.) 3 tablespoons butter or margarine 1/4 cup flour 1 teaspoon salt 2 cups milk (for sauce) 2 cups shredded Velveeta cheese (You may substitute any variety of shredded cheese that you like.) 1 tablespoon fresh chopped parsley (or 1 teaspoon dried chopped parsley) 1/4 cup finely chopped onion 2 cups additional milk (for pouring over casserole to come to the top of the potatoes) cooking oil spray salt and black pepper, to taste Prepare the sliced potatoes, and cover them with cold water in a large container, and set aside. In a medium saucepan, melt 3 tablespoons of butter or margarine. Remove from heat and stir in 1/4 cup flour, stirring until lumps are dissolved. Add 1 teaspoon salt, 2 cups milk, and 2 cups shredded Velveeta cheese. Return mixture to low heat, and cook, stirring constantly, until cheese sauce is thick and bubbly. Remove from heat. Stir in 1 tablespoon fresh chopped parsley. Spray a very large casserole dish (13-inch by 9-inch by 2-inch) with cooking oil spray, and place 1/3 of the cheese sauce on the bottom of the dish. Spread it out evenly. Now, place 1/2 of the sliced potatoes in a layer ...
- Bison Steak Frites How to make Bison Steak Frites. Get 15% off your bison order by copying this link into your browser: /classicbison Ingredients 1 large sweet potato, sliced into 1/4 inch sticks 1/4 tsp of ginger 1/4 tsp of coriander 1/4 tsp of cinnamon 1/4 tsp of cumin 1/4 tsp of black pepper 1/4 tsp of salt a pinch of cloves 1/4 tsp of cardamom 1/4 cup of extra virgin olive oil. 2 bison ribeye steaks frisée watercress 2 tbsp of cider vinegar 2 tbsp of apple cider 1 1/2 tsp of brown sugar 1/2 tsp of celery seed 1 tbsp of granulated garlic 1/4 cup of safflower oil blueberries Instructions 1. In a large bowl combine the sweet potato, ginger, coriander, cinnamon, cumin, black pepper, salt, cloves, cardamom, and extra virgin olive oil. Then toss to evenly coat the fries. 2. Spread them out evenly on a baking sheet and place in a preheated 425F oven. They should take about 30-35 minutes to cook. 3. Brush the ribeye steaks with extra virgin olive oil so they don't stick to the grill and season with cracked black pepper and salt; make sure you get both sides. 4. Place the steaks on a preheated grill pan and cook for a total of 7-8 minutes, flipping once. 5. While the steaks rest, assemble the frisée and watercress salad. Add the washed greens to a large bowl and combine with the cider vinegar, apple cider, brown sugar, celery seed, granulated garlic and safflower oil. Toss then season with salt and pepper. 6. Plate the salad and garnish with fresh blueberries. 7. When the ...
- How to Glass A Surfboard : Glassing Surfboard Rails Surfboard glassing colors need to be squeezed tight and evenly distributed around the rails. Distribute surfboard color evenly with tips from a surfboard repair specialist in this free video on surfboards. Expert: Gino Baricello Contact: Bio: Gino Baricello has enjoyed the surf along the Brazilian coast since he was 11 years old. As a teenager, he began building his own surf boards in his father's garage. Filmmaker: sam taybi
- Pro at Cooking - Episode 3 - Dave's Pimpin Pizza Veggie pizza is for noobs. Join Dave and his assistant Elyse in episode 3 of Pro at Cooking as Dave makes three kinds of pizza; veggie, veggie and "real". Look who's the expert on being Chinese. Step 1. Preparing the toppings A: Steam the spinach and be careful not to over-cook it... stop as soon as it's tender. B: Pre-heat and olive oil the pan. Add garlic. Fry chopped mushroom and red pepper with little salt and pepper 'til tender. C: Cut eggplant and zucchini to thin slices. Place in frying pan on medium heat. Switch sides as soon as the eggplant turns brown. zucchini is ready when it's moist. D: Chop Jalapeno Step 2. Assembling three kinds of pizza A: Veggie pizza 1 Evenly distribute a thin layer of sundried tomato pesto sauce on top of a pita. Place pre-cooked spinach on top, then cooked mushroom and red pepper. Add bits of goat cheese, smoked mozzarella, and a spoon full of parmesan cheese. Place the pizza in a pre-heated 425 degree oven (best on a pizza stone) for about 8 minutes. B: Veggie pizza 2 Evenly distribute a thin layer of sundried tomato pesto sauce on top of a pita. Place pre-cooked slices of eggplant and zucchini. Add bits of goat cheese, smoked mozzarella, and black olives. Finally, put a spoon full of parmesan on top. Place the pizza in a pre-heated 425 degree oven (best on a pizza stone) for about 8 minutes. C: REAL pizza 1 Evenly distribute a thin layer of pesto sauce on top of a pita. Put as many slices of Genoa (hot Italian ...
- Mudpie Bars - Vegan Snicker Bars Missing that chewy caramel with chocolate and peanut overtones and a hint of vanilla? Scrumptious "suitable for all diets" alternative to regular store bought candy bars. RECIPE Mudpie Bars Vegan 1 Cup Rice Syrup 1 Tsp Vanilla 2 Tablespoons Smooth Peanut Butter 4 Cups Rice Crispies 1 Cup Chopped Almonds (or Pecans, Walnuts, Brazils) 200 grams Dark Chocolate Chips 1 Cup Smooth Peanut Butter Method 1. Melt Chocolate and 1 Cup of Peanut Butter in glass bowl 2. Bring rice syrup to boil, stir in vanilla and 2 tablespoons peanut butter and whisk until smooth 3. Add Rice Crispies and chopped nuts, mix well. 4. Sprinkle half of rice crispy nut mixture into a glass dish and flatten with a flat base bowl or clean moistened hands (12" wide, 1.5 inch deep) 5. Whisk the chocolate and peanut butter and pour into the dish, spread evenly 6. Add the remaining rice crispy nut mixture evenly over the chocolate 7. Pat down with flat base bowl or clean hands and refrigerate to set. Original Recipe by Andy Cunningham of Green Cuisine, Victoria, BC, Canada Check out their recipe book - visit ALLERGY INFORMATION: Contains peanuts and any other nuts you may use. Ensure you check with the person for their nut allergies.
- Tuning Your Tom Tom Drum Using Equal Tension Tuning Your Tom Tom Drum Using Equal Tension - Drum Tuning - Seating the drum Head Evenly - Best Drum Tuning Guide - Drum Rim - Drum Head - Drum Hardware - Tama Imperialstar
- GreekFoodTv☼ Pastitsio - Greek Baked Pasta, Ground Meat Sauce, Bechamel Pastitsio is classic Greek comfort food and great festive fare, a rich, filling dish that kids and grown-ups alike love. For the recipe, press the more button. 8-10 servings ¼ cup extra-virgin Greek olive oil, plus more for the pan 2 medium onions, finely chopped 1 ½ pounds ground lean beef 1 large garlic clove, peeled and minced (optional) 3 cups peeled and chopped plum tomatoes 2 cinnamon sticks 5-6 allspice berries 1 bay leaf Salt and freshly ground black pepper 1 cup dry white wine 1 ½ pounds tubular pasta Grated kefalotyri, to taste For the Béchamel Sauce 4 Tbsp. butter 4 Tbsp. all-purpose flour 4-5 cups warm milk Salt and freshly ground black pepper Grated nutmeg, to taste 2 eggs 1 cup grated kefalotyri cheese 1. In a large skillet, heat the olive oil. Add onion and sauté until soft and translucent, 5-7 minutes, stirring frequently with a wooden spoon. Add ground meat and continue stirring until meat begins to brown. Add tomatoes, salt and pepper, cinnamon sticks, allspice berries, bay leaf, and wine. Stir well to combine. Lower the heat, cover the pot and simmer until the sauce is thick and the meat is cooked, about 35 to 40 minutes (add water if necessary during cooking). Remove pan from heat; let meat cool slightly. 2. Meanwhile, bring a large pot of salted water to a boil and cook pasta until al dente (it should be a little firmer than normal). Remove and drain. Toss with 1-2 tablespoons olive oil to keep it from sticking. 3. Béchamel sauce: In a medium pot, melt ...
- How to Pack and Smoke a Pipe I thought it would be good for me to share with everyone here how I pack and smoke my pipes (tobacco pipes). There are many different methods on how to 'properly' pack and smoke a pipe to make sure it burns fully and evenly. I've found this technique to be pretty good at both of those. At the end of the day, you want a bowl of evenly and well packed tobacco and a smoking speed that keeps the tobacco burning but without becoming too hot. Let me know if you have any thoughts or any different or better methods below in the comments!
- Tuning Your Tom Tom Drum Using An Evans Torque Key Tuning Your Tom Tom Drum Using An Evans Torque Key Drum Tuning Equal Tension Seating the drum Head Evenly Best Drum Tuning Guide Drum Rim Drum Head Drum Hardware Tama Imperialstar
- Practicing Guitar With A METRONOME Practicing with a metronome may help you in a way that you wouldn't have originally thought. At least that's the case with me! Would you believe me if I told you that 200bpm can sound faster than 216bpm and beyond, at least in a certain way? The irony of playing with a metronome for me personally was that usually, people practice with them to speed up. When I started practicing with them, they slowed me DOWN. But don't let that make you think it won't let you reach your top speed! The reason why a metronome slows you down is because you're not able to completely blaze through what you're playing at the speed that you want to. Instead, the metronome will make you play a lot more evenly. Playing evenly and being on time is MUCH more difficult than playing at a million miles per hour, and it sounds much better. If you play something evenly at a slower tempo with a metronome, then speed it up a bit but play with NO metronome, the slower tempo will come off as faster to your ears. Our ears are more sensitive to timing and being even than they are being fast.
- Tuning Your Tom Tom Drum Using A DrumDial Tuning Your Tom Tom Drum Using A DrumDial Drum Tuning and Equal Tension Seating the drum Head Evenly Best Drum Tuning Guide Drum Rim Drum Head Drum Hardware Tama Imperialstar
- Lenna - Evenly-(LEX Tribal Dub)
- Cleaning Tile : How to Lay Tile Evenly on a Floor Lay your own floor tile with help from a pro. Start from the center of the room, avoid air bubbles and let the first few tiles set before laying more. Master your tile with tips from a professional contractor in this free video on laying floor tile. Expert: Chris Wade Bio: Chris Wade has been a successful contractor for more than 23 years. Filmmaker: Daniel Brea Series Description: Most homes feature tile in kitchens, bathrooms and floors. Maintain all types of tile and grout safely and simply with this free video series on cleaning tile.
- The Amazing Halogen Oven - www.clifford- www.clifford- A revolution in cooking, this oven uses halogen technology to perfectly cook almost any meal, much faster than conventional methods. The infra red halogen element of the oven heats up almost instantly, reducing pre-heating times and the fan-assist function circulates the air so food cooks evenly, without the need for turning. So versatile, the temperature adjustment control allows you to use the halogen oven for defrosting, baking, roasting, steaming and, unlike a microwave, will perfectly brown food. So now you can enjoy crisp pastries, evenly cooked biscuits and juicy, perfectly browned meat. Supplied with 2 racks, which can be used at the same time to cook entire meals, the Halogen Oven is also a healthy cooking method as any fats drain away. If this wasn't enough, the halogen oven has an in-built self-cleaning function, which means no washing up! To find out more visit www.clifford-
- Betty's Swiss Cheese Chicken Casserole Recipe In this video, Betty demonstrates how to make a lovely and flavorful Swiss Cheese Chicken Casserole. It is made of chicken breast tenderloins, Swiss cheese, and chicken soup, with buttery herb seasoned stuffing on top. The casserole bakes for about an hour, making the chicken tender and juicy and the stuffing nice and crispy. Ingredients: 12 boneless, skinless chicken breast tenderloins (about 1 1/2 pounds) 6 oz. shredded Swiss cheese 10 3/4-oz. can condensed cream of chicken soup 1/4 cup milk 2 cups herb-seasoned stuffing mix 1/4 cup butter, melted cooking oil spray Spray a 12-inch by 8-inch by 1 1/2-inch baking dish with cooking oil spray. Place 12 chicken breast tenderloins in a layer on the bottom. Sprinkle the tenderloins evenly with 6 oz. Swiss cheese. In a small bowl, combine 1/4 cup milk and a 10 3/4-oz. can cream of chicken soup. Spread over top of chicken and cheese. Evenly sprinkle 2 cups of herb-seasoned stuffing mix over chicken soup layer. Drizzle 1/4 cup melted butter over top. Cover casserole with aluminum foil, and refrigerate for about 4 hours. Bake at 350 degrees for 55 minutes, or until casserole is nice and bubbly. Uncover casserole, and bake 5 to 10 minutes longer, until top is crisp and beginning to brown. The flavors mingle in this casserole to give an unbelievable taste--and the casserole is so simple to make! You may make this casserole ahead and refrigerate it up to 24 hours before baking, or you can even freeze it for a long period of time, then ...
- Part2: How to Draw Face, Front View Step by Step Face Proportions For Portrait Drawing- Formula Hey welcome back everybody....Its Merrill, I recommend that you watch part 1 before you see part 2. Just click on the image if you havent seen part 1 yet. This is a very important video for anyone who wants to learn portraiture. In this video, I will model the formula taught in part 1 to teach you how to draw a face from your memory. In order to make things easy to remember, I will demonstrate my process step by step. People who memorize these steps will be able to draw a human face from memory without a reference image. Lets get started. Step 1: Draw an oval. Next put a horizontal line through the oval, slightly higher than the half way point. Then add four evenly spaced dots. These four dots will mark the inner and outer corners of each eye. Remember that there is one eye length in between the two eyes. It is imperative that the dots are evenly spaced. You will also need two bigger dots to mark the center of each eye. Step 2: Now add a rectangular shape. The rectangle should be taller than it is wide. The corners of the rectangle should line up with the two dots that mark the center of each eye. Step 3: Add the ears and eyebrows. The ears most often line up with the top of the eye and the bottom of the nose. Step 4: Add the eye shape. Generalized eyes are almond shaped. You will see the bottom of the iris but not the top. Most eyes also have a second line for the eyelid above the eye. Step 5: Add the nose. Notice that I did ...
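The eye-line construction in Step 1 — four evenly spaced dots, with one eye-length between the two eyes — can be checked with a little arithmetic. A sketch in Python; the unit eye width is an arbitrary choice, not from the video:

```python
# Four evenly spaced dots mark the inner and outer corners of both eyes.
eye_width = 1.0                                  # arbitrary unit
corner_dots = [i * eye_width for i in range(4)]  # evenly spaced marks

left_eye_center = (corner_dots[0] + corner_dots[1]) / 2
right_eye_center = (corner_dots[2] + corner_dots[3]) / 2
gap_between_eyes = corner_dots[2] - corner_dots[1]

# Even spacing automatically leaves exactly one eye-length between the eyes,
# just as the instructions require.
print(corner_dots, gap_between_eyes)
```

This is why the video stresses that the dots must be evenly spaced: the one-eye-length gap falls out of the spacing for free.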
- 7 Things (Miley Cyrus) Guitar Lesson (Part 1 of 2) *Update* For everyone who has asked about the strumming pattern, here is a little lesson on strumming: To mimic my strumming, you would have to do this pattern on every chord (d = down, u = up): dud ud ud udu (change to next chord and repeat...) Now if you're trying to sing at the same time, you would probably change that or simplify it to accommodate your singing- the important thing is, if you strum evenly and in time, a lot of different patterns will sound fine. It doesn't have to be that specific pattern, it just has to be smooth and in time. So I would practice strumming just literally dududu over and over while you sing it, and when you can do that evenly and in time, try changing it up, adding more strums, taking some out. Kris Farrow gives an acoustic instructional on how to play Miley Cyrus's "7 Things". Find the sing-along track available on iTunes under "Acoustic Guitar Karaoke".
- Farécla G3 Paste Compounding Tutorial Prior to compounding, soak a Standard G Mop in clean water and spin off the excess in a bucket. Open the tub of G3 and using a clean cloth, apply a small quantity of G3 to the surface. Place the G Mop face down on top of the compound, applying medium pressure and evenly spread the compound into the surface before starting the machine. Start the polisher and work evenly over the flatted area. It is better to start slowly and increase the speed gradually. Farécla Compounds contain no fillers and thus the surface defects will be permanently removed. Compound the surface until all marks have been removed. During the process, keep the Compounding Foam damp by spraying minimal amounts of clean water from a Farécla water spray bottle. This helps to keep the temperature of the surface down and lubricates the compound. Further sprays of water help the compound to break down fully and restore a high gloss finish in one compounding operation. Finally, wipe carefully with a Farécla Finishing Cloth.
- Beauty Short: Applying Paint Pots Evenly One way to apply paint pots evenly and smoothly. This is especially useful for metallic/frost paint pots that are more difficult to apply evenly. My tip is to use a fluffy brush to apply them like a MAC 217 brush. Check out my blog: Follow me on Twitter: Thanks for watching! & Please subscribe :)
- Georgian Folk Music - Kelaptari - Sagmiro This is one of Georgian folk songs. Georgian national music is very old and one of the most unique in the world. Georgian folk music possesses what is the oldest tradition of polyphonic music in the world, predating the introduction of Christianity. Tuning Scales used in traditional Georgian music have, like most European scales, octaves divided into seven tones (eight including the octave), but the spacing of the tones is different. As with most traditional systems of tuning, traditional Georgian folk music uses a just perfect fifth. Between the unison and the fifth, however, come three evenly-spaced notes, producing a compressed (compared to most European music) major second, a neutral third, and a stretched perfect fourth. Likewise, between the fifth and the octave come two evenly-spaced notes, producing a compressed major sixth and a stretched minor seventh. This system of tuning renders thirds as the most consonant interval after fifths, which resulted in the third being treated as a stable interval in Georgia long before it acquired that status in Western music. Some consider the Georgian scale a "quintave system" (as opposed to the octave-repeating "octave system"). Due to the neutral tuning within the quintave system, the eighth degree or octave is slightly widened, which often results in a rise in pitch from the beginning of a song to the end. Because of the influence of the Western music and its different system of tuning, present-day performances ...
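The scale description above translates into concrete pitch values. A sketch in Python, assuming a pure 1200-cent octave (the text notes real performances widen it slightly) and the even divisions the passage describes:

```python
import math

def cents(ratio):
    """Size of an interval in cents for a given frequency ratio."""
    return 1200 * math.log2(ratio)

fifth = cents(3 / 2)   # just perfect fifth, ~701.96 cents
octave = 1200.0        # pure octave assumed here

# Three evenly spaced notes between unison and fifth -> four equal steps.
lower = [i * fifth / 4 for i in range(5)]
# Two evenly spaced notes between fifth and octave -> three equal steps.
upper = [fifth + i * (octave - fifth) / 3 for i in (1, 2, 3)]

scale = lower + upper
for degree, value in enumerate(scale):
    print(f"degree {degree}: {value:7.1f} cents")
```

The resulting second (~175 cents) is narrower than the Western 200-cent whole tone, the third (~351 cents) sits neutrally between minor and major, and the fourth (~527 cents) is stretched, matching the compressed, neutral, and stretched intervals described above.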
- Osaka and the Chopsticks Na,Na ..... HEEEEEEEEH xDDDD
- Candy Corn Cookies Yields: 9 ½ dozen Ingredients: 1 (17.5oz) pkg. sugar cookie mix 1/3 cup butter, softened 1 egg Orange paste food coloring 2 oz. semisweet chocolate, melted and cooled Preparation Directions: ~ Line an 8 x 4-inch loaf pan with foil extending over sides of pan. In a medium bowl mix together cookie mix, butter, and egg until a soft dough forms. ~ On a work surface place ¾ cup of dough. Knead in the desired amount of food coloring. Press dough evenly in bottom of pan. Divide remaining dough in half. Gently press one half on top of orange layer. With the remaining dough knead in chocolate until color is uniform. Gently press evenly on top of plain dough. Refrigerate for 1 ½ to 2 hours. ~ Preheat oven to 375 degrees. Remove dough from pan, cut crosswise into ¼-inch slices. Cut each slice into 5 wedges. Place wedges on ungreased cookie sheet 1-inch apart. ~ Bake 7 to 9 minutes or until cookies are golden. Cool 1 minute and remove from cookie sheets to a wire rack to cool completely.
- Li-Dar Forming. Thermoforming machine, vacuum forming machine www.li- 1. With its computerized and automatic PLC system it is easy to operate and maintain, and seldom breaks down. 2. The motor for pulling film is controlled by an adapter and can be adjusted freely. It has a stable brake and a material-saving feature. 3. The heater, with separate controls, heats instantly, saves electricity, and distributes heat evenly with its adjustable and automatic backup device; high security is guaranteed. 4. The cooling water of the mould set is controlled separately so as to cool the machine evenly. 5. The width of the plastic material can be adjusted easily. 6. Suitable for PVC, PS, PP, PET, PE plastic material.
- Mickey VS Sephiroth An upload of an old video from FileFront to supply our medium to YouTube viewers. KH-.
- Great Steam Train Race 2010 3526, 3265 and 3642 race their respective trains through Victoria St station en route to Maitland. The Great Steam Train Race is a feature of the annual Hunter Valley Steamfest. One of the kids watching the trains apparently didn't enjoy the experience!
- Crochet Baby Bootie - Part 1 Bootie Written Instructions crochet- Booty Sole Size F Hook CH 14 Round 1: 3 SC in 2nd CH from hook. 5 SC in ea of the next 5 Ch's, 6 DC in each of the next 6 CH's. 7 DC in last chain(toe). 6 DC in ea of next 6 CH's, 5 SC in ea of next 5 CH's. 2 SC in BEG SP, JOIN. Pattern by Teresa Richardson Crochet Baby Booty Toe Decrease - Slow Motion Round 2: CH 3, 2 DC in same SP, DC in next ST, 2 DC in next ST. DC in each of the next 11 stitches. (Toe Increases) 3 DC in next ST, 1 DC, 3 DC in next ST, 1 DC, 3 DC in next ST, 1 DC, 3 DC in next. DC in each of the next 11 stitches. 2 DC in next ST, 1 DC in next, JOIN. Round 3 & 4: CH 3, in the back loop, DC in each of the next 45 ST's. (46 stitches) Cut yarn, Sew in Tails. Booty Toe Top Row 1: Work 27 DC evenly across toe with 4 decreases at toe center. Start with the decreases after about 9 double crochet. (23 double crochet after decreases) Row 2: Evenly DC across 23 DC with 3 decreases at toe center. Start the 3 decreases after 8 DC. (20 DC after the 3 decreases) Row 3: Turn baby booty wrong side out. HDC joining on opposite side with a SL ST. Booty Edging Round 1: Ch 2, 31 HDC evenly around booty opening. Round 2: CH 3 for first HDC, CH 1. Work 15 more HDC, CH 1 evenly around booty top. (16 HDC, CH 1) This will be the row for weaving in a tie or ribbon to tie booty. Round 2 & 3: CH 4, DC in same ST for first V stitch, DC, CH 1, DC in next HDC for next V stitch. V stitch in each of the remaining ...
- How to Paint Quickly and More Evenly with a Roller Home improvement expert Ron Hazelton shows how to paint quickly and evenly using a paint roller and 5-gallon bucket. Visit for more videos on painting walls, paint colors and indoor projects.
- Betty's Festive Mexican 7-Layer Dip Recipe In this video, Betty demonstrates how to assemble everyone's favorite 7-Layer Mexican Dip. Great for parties! Ingredients: 16 oz. can refried beans 2 avocados 2 teaspoons lime juice 2 chopped tomatoes (2) 4.25 oz. cans chopped or sliced ripe olives 1 cup sour cream 1/2 cup mayonnaise 1 package taco seasoning mix shredded Mexican cheese (Cheddar and Monterey jack) to taste 1 bunch green onion tops, chopped 1 package of Tostitos crispy rounds tortilla chips Spread the refried beans evenly in the bottom of a clear (approx. 9 inch by 12 inch) Pyrex dish. For the second layer, peel and mash the two avocados with the two teaspoons lime juice in a small mixing bowl. Spread this over the top of the refried beans. Wash and chop 2 tomatoes. Drain them with a colander, and then sprinkle them evenly over the avocado layer. Next, drain the ripe olives and sprinkle them over the tomato layer. At this point, mix together 1 cup of sour cream, 1/2 cup of mayonnaise and 1 package of taco seasoning mix in a small mixing bowl. Spread this mixture evenly over the ripe olive layer. Now, sprinkle the mayonnaise mixture layer generously with Mexican cheese. Finally top the 7-layer dip off with a sprinkling of the bunch of green onion tops. To serve, dip a Tostito into the dip and eat. Enjoy!!!
twitter about evenly
Blogs & Forum
blogs and forums about evenly
“I am having trouble getting the thermal paste applied evenly, and which is possilby causing my A64 to overheat. I have trying to get paste perfectly even out with a credit card bunch of time, and seem”
— How to apply thermal paste evenly - CPUs - CPU-Components,
“Gene Frenette: Evenly matched rivals Florida, Georgia seek spark in Submitted by Gene Frenette on October 29, 2010 - 9:39pm Gene Frenette's Blog”
— Gene Frenette: Evenly matched rivals Florida, Georgia seek,
“Evenly Spaced. In The Machine is the official blog of the Inventor Product Management Team. From time to time we'll also use the blog to solicit feedback from users via surveys”
— Evenly Spaced,
“BioMarin Evenly Poised-We believe that the successful development and commercialization of the robust pipeline at BioMarin will boost the company's top line”
— BioMarin Evenly Poised - ,
“Quick, value-added, actionable stock market ideas for serious traders and investors. CSC Looks Evenly Poised - Analyst Blog. Home. News. Ratings. Ideas”
— CSC Looks Evenly Poised - Analyst Blog | ,
“We are initiating coverage on Ultra Petroleum Corp. (UPL) with a Neutral recommendation and a target price of $55. Houston, Texas-based Ultra Petroleum is an independent energy firm engaged in the acquisition, development, exploration and”
— Ultra Petroleum Evenly Poised – Analyst Blog | Stock Market,
“Voters are split over whether lawmakers should approve Wall Street reform legislation, a new poll found Thursday evening. As Democrats' financial regulatory reform bill heads to the Senate floor for debate next week, the public is evenly divided”
— Voters split evenly on financial reform bill - The Hill's,
“How to Run an Online Forum. Many people want to run an online forum. Most believe it to be as easy as opening one of the free forums found online and letting it run itself. This isn't generally true. First of all, there are”
— How to Run an Online Forum | ,
related keywords for evenly
- evenly divisible
- evenly yoked
- evenly spaced fonts
- evenly distributed
- evenly divided
- evenly space divs
- evenly matched
- evenly yoked verse
- evenly rotating economy
- evenly spaced
- evenly divisible calculator
- evenly divisible by
- evenly divisible means
- evenly divisible numbers
- evenly divisible definition
- evenly divisible by three
- evenly divisible 3
- evenly divisible 10
- evenly divisible by 4
- evenly divisible by 6 | 1 | 3 |
How the Discovery of the Higgs Boson Could Break Physics
- 6:30 AM
- Categories: Physics
UPDATE: A leaked video published on CERN’s website earlier today appears to have accidentally announced the discovery of the Higgs boson ahead of the rumored official announcement scheduled for early tomorrow morning. Watch the announcement live on Wired.com beginning at 11 p.m. PT tonight (2 a.m. ET tomorrow morning).
If gossip on various physics blogs pans out, the biggest moment for physics in nearly two decades is just days away. The possible announcement on July 4 of the long-sought Higgs boson would put the last critical piece of the Standard Model of Physics in place, a crowning achievement built on a half-century of work by thousands of scientists. A moment worthy of fireworks.
But there’s a problem: The Higgs boson is starting to look just a little too ordinary.
As physicists at Europe’s Large Hadron Collider prepare to present their latest update in the hunt for the Higgs boson — the strange particle that exists everywhere in space and interacts with all other elementary particles, giving them their mass — other physicists are preparing for disappointment.
That’s because scientists have been secretly hoping all along that, when they finally found the Higgs, it would be an interesting particle with unexpected behaviors — even somewhat unruly. A perfectly well-behaved Higgs leaves less room for new, exciting physics — the kind that theorists have been wishing would show up at the LHC.
The current situation has some physicists starting to worry and, if coming years fail to turn up interesting results, the field could be headed for a crisis.
Since the mid-20th century, particle physicists have been developing a theory known as the Standard Model, which accounts for all the known subatomic particles and every fundamental force except gravity. While this model has proven time and time again to be extremely good at predicting particles and forces that were later discovered experimentally, it is not the final theory of everything. The Standard Model still has various problems that stubbornly refuse to cooperate.
Many contenders have stepped up to account for the discrepancies of the Standard Model but none has been more adored than a theory known as supersymmetry. In order to fix the Standard Model, supersymmetry posits that all known particles have a much more massive superpartner lurking in the subatomic world.
“For particle physicists, the more symmetry there is, the nicer a theory is,” said theoretical physicist Csaba Csaki of Cornell University. “So upon first seeing it, most particle physicists fell in love with [supersymmetry].”
The tricky part is that the LHC, in addition to searching for the Higgs, has also been looking for these heavy supersymmetric superpartners. But thus far, nothing is showing up. Furthermore, all indications are that scientists will find that the Higgs weighs 125 gigaelectronvolts (GeV) – or about 125 times more than a proton – which means that it sits exactly where the Standard Model expected it to be.
Great news for the troublesome Standard Model, not so much for its savior, supersymmetry.
Supersymmetry was first proposed in the 1960s and developed seriously during the heyday of particle physics in the 1970s and ‘80s. Back then, large particle accelerators were smashing subatomic particles together and discovering a slew of new bits and pieces, including quarks and the W and Z bosons. Supersymmetry was put forth as an extension of the Standard Model, but the predicted particles were out of reach for atom smashers of that era.
Before the LHC was up and running in 2010, many physicists were hopeful that it would uncover some evidence for supersymmetry. Despite a few promising results, experimental confirmation of the idea keeps failing to show up.
This has a few in the community beginning to seriously doubt their darling supersymmetry will ever be a viable theory.
“It’s a beautiful theory, and I would love it if it were true,” said particle physicist Tommaso Dorigo, who works on one of the LHC’s two main experiments. “But there is not any compelling evidence.”
For two decades, people have been claiming the supersymmetry results were just a few years away, Dorigo added. So as those few years kept coming and going with no results, physicists have tried explaining the non-appearance of these particles by making additions and elaborations to supersymmetry.
Already, the simplest versions of supersymmetry have been ruled out and a Higgs boson at 125 GeV could require even more changes, making many physicists nervous, Csaki said. Tweaking the theory to explain why even the lightest of the predicted superpartners haven’t shown up destroys some of supersymmetry’s beauty, he said.
For instance, one of the best aspects of supersymmetry is that many of its extra subatomic particles make excellent dark matter candidates. Altering supersymmetry could get rid of these potential dark matter particles, and further changes might make the theory even less useful.
“One day we may just look at it and ask if this is still the theory that we’re in love with,” Csaki said.
Of course, all is not yet lost. The LHC is still smashing particles together and, over the next few years, it will do so at higher and higher energies, perhaps finally bringing supersymmetry to light. While the accelerator will be shut down in 2013 for repairs, 2014 and 2015 will have the machine running at its top capacity.
Many physicists are eager to see if the lightest predicted superpartner – the supersymmetric top quark, or stop squark – will show up. The stop squark is at the heart of supersymmetry and is needed to explain many properties of the Higgs. Without it, many physicists could give up on supersymmetry entirely.
“If after two years of running at high luminosity at the LHC they don’t see anything, we will be out of ideas of the conventional sort,” said Csaki. “We will be in some kind of crisis.”
While troubling, this situation doesn’t bring physics to a grinding halt. The Standard Model still has holes in it, and something needs to account for the dark matter and energy in the universe. Alternative theories to supersymmetry exist. Some require additional forces in nature, new interactions among particles, or for the Higgs boson itself to be composed of simpler pieces.
“However those models have their own problems to be a consistent models of nature,” wrote particle physicist Rahmat Rahmat from the University of Mississippi, who also works on the CMS experiment, in an email to Wired.
As yet, supersymmetry is still the front-runner for theories beyond the Standard Model and most physicists remain optimistic for its prospects.
“I’m really hopeful that besides the discovery of the Higgs, we will also soon see something else,” said Csaki.
Image: The giant detector for the CMS experiment, one of the main Higgs-searching experiments at the LHC. CMS collaboration/CERN
Under the Patient Protection and Affordable Care Act (PPACA), a previously obscure government advisory body has acquired vast authority to decide which health care services Americans will have access to. The United States Preventive Services Task Force (USPSTF) was created in 1984 as a government advisor with the mission of assessing the clinical utility of preventive health measures such as screening tests and issuing nonbinding recommendations about which measures doctors should incorporate into routine medical care. PPACA gives the USPSTF’s recommendations the force of law, making them de facto mandates on which preventive services private health plans and public programs such as Medicare must pay for. Services that do not make the USPSTF grade are unlikely to be covered at all. The USPSTF was not designed to wield this kind of sweeping and binding authority. It does not maintain the transparency, deliberative process, appeal process, or requirements for public notice and comment that are hallmarks of sound regulatory policymaking. Moreover, because the USPSTF has few guidelines governing its function, it has great flexibility to adapt its criteria and grow its mandate in ways that may conflict with political goals and public sentiment and lead to unintended consequences.
Key points in this Outlook:
- Under President Obama’s health care plan, the United States Preventive Services Task Force now wields great power to decide which health services (like mammograms) doctors should provide, yet it has few checks on its sweeping authority.
- Its mandates are likely to raise health insurance costs and premiums, while reducing the number of covered preventive services.
- To improve accountability for an agency that is both out of date with the medical community and out of touch with the public, Congress should closely monitor the impact new mandates have on patient care.
In November 2009, the United States Preventive Services Task Force (USPSTF) said women age forty to forty-nine should not get routine mammograms. Almost instantly, a little known and largely marginalized government health agency was thrust onto the front pages of America’s newspapers. What proved the most controversial aspects of the mammography proposal were some of the criteria the task force had considered in reaching its decision. Among other things, the USPSTF was weighing the benefit of breast cancer screening against the burden of letting some additional cancers go undetected.
To health professionals who had championed earlier, more widespread screening and to women who had heeded that advice, the new analysis seemed callous and poorly conceived. Embedded in its analysis, the USPSTF also considered the cost of some of the newer screening modalities, such as digital mammography.1 Critics of President Obama’s new health care law quickly cited this verdict as emblematic of the rationing that the legislation would soon usher in.
Of all the criticisms of PPACA, this critique was most firmly grounded in the plain language of the new statute. That is because, under PPACA, the USPSTF has acquired an expansive new mandate. Going forward, how the USPSTF grades preventive medical technologies will shape which preventive health services are covered by private health plans and public programs like Medicare.2 In short, the USPSTF’s decisions will bind much of the American health care market.
The USPSTF decision around breast cancer screening was widely rejected because it was so out of sync with other federal policy priorities. But this is not unusual, as the USPSTF has issued plenty of recommendations that have diverged from conventional clinical dogma. Many have even conflicted with advice offered by other federal agencies like the Centers for Disease Control (CDC). Previously, many of these USPSTF recommendations were simply ignored by practicing doctors. But with the passage of PPACA, the group’s rulings cannot be disregarded any longer.
Now, the USPSTF is back in the news again, having recently issued another set of controversial decisions. In the first, it recommended against routine screening for prostate cancer with a simple, cheap, and widely used blood test for PSA (an enzyme released by the prostate when the gland’s tissue becomes disrupted).3 In a separate decision, the task force recommended against screening for cervical cancer with a simple test for a virus that predisposes women to the cancer.4 In a recently released report to Congress, the USPSTF identified some of its additional “policy areas that deserve further examination.” These include screening for colon cancer and heart disease and counseling for obesity. “Evidence gaps” that it says warrant further research include “screening and treatment for depression in children, screening and counseling for alcohol misuse in adolescents, [and] aspirin use to prevent heart attacks and stroke in adults ages 80 years and older.”5
"There are plenty of reasons to believe that the current construction of the USPSTF and the way it operates leave it unsuited to discharging its new authority."
With all of these decisions, the task force is quickly becoming a household name. The Obama administration is touting the availability of free preventive services under PPACA as one of legislation’s benefits, and the USPSTF is the body charged with designating which services will be covered.6 But there are plenty of reasons to believe that the current construction of the USPSTF and the way it operates leave it unsuited to discharging its new authority.
A large problem is the lack of formal regulatory guardrails governing the USPSTF’s operations. The task force started life as an advisory body and has now become a de facto regulatory agency. But there has been too little reflection along the way on how the body is organized and discharges its mission. Whether by accident or by political design, the USPSTF has evolved into a powerful health regulatory agency, but one with few of the requirements for transparency and due process that Americans have come to demand from their regulatory bodies.
Many of these problems were on display in the USPSTF’s recent recommendations on breast cancer screening. In reaching its decision, the task force said “the harms resulting from screening for breast cancer include psychological harms, unnecessary imaging tests and biopsies in women without cancer, and inconvenience due to false-positive screening results.” When evaluating breast cancer screening, “one must also consider the harms associated with treatment of cancer that would not become clinically apparent during a woman’s lifetime . . . as well as the harms of unnecessary earlier treatment of breast cancer that would have become clinically apparent but would not have shortened a woman’s life,” the task force wrote.7
In criticizing the decision, the American Cancer Society responded that the USPSTF had arbitrarily decided that screening 1,300 women to save a life was an acceptable cost but screening 1,900 to save a life was not.8 The USPSTF has no formal rules or guidance on how it arrives at this kind of analysis. As a result, not just clinicians are now questioning how the USPSTF reaches its decisions and the merit in investing its assessments with so much political clout.9 Even the political architects of the USPSTF’s new authorities are expressing doubts.
In October 2010, when the USPSTF canceled a meeting at which it was due to recommend against prostate cancer screening for men of all ages, many figured that the Obama administration’s political leadership had intervened to avoid another round of negative headlines about the USPSTF and its new mandates on access to health care.10 In the post-PPACA age, the USPSTF is the bleeding edge of the unaccountable and largely unpredictable Washington institutions that will begin intruding into the public’s available medical choices.
A Brief History of the USPSTF
The USPSTF was created twenty-five years ago as an independent advisory panel. It is composed of sixteen individuals with expertise in health prevention and primary care, most of whom are clinicians and academic experts. The USPSTF members are volunteers who are appointed to four-year terms by the director of the Agency for Healthcare Research and Quality (AHRQ). The USPSTF serves as an advice-giving body to that agency.
In recent years, prior to the passage of PPACA, the task force has had three main objectives. Its first mandate was to evaluate the benefits and risks of individual medical screening and diagnostic services based on age, gender, and risk factors for disease. Second, it was tasked with issuing recommendations about which preventive services should be incorporated routinely into primary medical care and for which populations. Third, it was supposed to identify a research agenda for clinical preventive care.
Initially, the group’s work was mostly advisory. Its original mission was, simply stated, to “develop recommendations for primary care clinicians on the appropriate content of periodic health examinations.”11 It was left to doctors and patients to evaluate the recommendations and decide how to best incorporate them into clinical practice.
In its individual screening recommendations, the USPSTF gives a letter grade of A (strongly recommends) through D (recommends against). These grades are based on the task force’s interpretation of the strength of medical evidence supporting a screening tool and its benefits versus clinical and (sometimes) economic costs. When the task force does not believe there is enough evidence to render a verdict, it will give a grade of I (insufficient evidence).
The USPSTF typically favors large prospective, randomized trials to validate a preventive service. But such research is generally hard to conduct for screening tests. It would require, for example, patients at risk for a particular disease to be randomly selected to either receive a screening test for the ailment or forgo the diagnostic measure. The patients would then need to be followed, sometimes for many years, to see if the tool enabled screened patients to recognize better health outcomes (and lower overall utilization of medical services) than patients who were randomly selected to forgo the screening test.
Because of the high burden of evidence that the USPSTF requires to complete its evaluation of a preventive service, it ends up issuing a majority of I recommendations.12 The fact that most preventive services have ended up with an I has had few direct implications in the past. But that is about to change as a result of PPACA and the importance the law ascribes to USPSTF ratings.
An Expanding Mission
The USPSTF’s new authority began to take shape with the Medicare Improvements for Patients and Providers Act (MIPPA), signed into law in July 2008. MIPPA shifted decisions about Medicare’s coverage of individual preventive services away from Congress to a “national coverage determination” process that is run by Medicare but heavily influenced by the USPSTF.13 Previously, Medicare did not have the legal authority to routinely add coverage for medical services aimed at prevention. So the Centers for Medicare and Medicaid Services (CMS) often had to get explicit authority from Congress to pay for new services like screening tests or wellness physicals. In some cases, it would use creative interpretations of its existing authorities to devise ways to cover some preventive services. Needless to say, regardless of the path the CMS chose, the process was long and cumbersome.
The idea of MIPPA was to make it easier for the CMS to add coverage of additional preventive services without requiring a separate act of legislation in each instance.14 Under the new process, Medicare was able to independently assume coverage of new preventive services, subject to a USPSTF determination. Starting in January 2009, the CMS was given authority to add coverage of preventive services on its own. For the CMS to add coverage, a service had to be deemed “reasonable and necessary” for the prevention or detection of an illness or disability and appropriate for Medicare beneficiaries.15 The preventive service also had to earn a grade of A or B from the USPSTF. The latter requirement gave the USPSTF a prominent role in determining what preventive service Medicare could pay for. Like other well-intentioned legislation, this measure had unintended consequences.
PPACA has extended this construct and substantially increased the USPSTF’s role by turning the discretionary arrangement into a de facto mandate. PPACA requires that health plans and insurers offering group or individual health insurance provide coverage for preventive health services with a grade of A or B and that they not impose cost-sharing requirements with respect to such services. The requirement that preventive services with a letter grade of A or B be fully covered is likely to prove costly to private insurance plans.16 Insurers previously were not covering all of these services, and when they did cover them, they often shared the costs with consumers. Now health plans will be required to cover them with no co-pays.
Ample evidence shows that mandates like these end up raising health insurance costs and premiums. In 2000, the Congressional Budget Office estimated that the marginal cost of state benefit mandates was 5–10 percent of total claims between 1990 and 1998. A 2003 Government Accountability Office study put the aggregated cost at 3–5 percent of premiums. Other estimates have put the impact of mandates as high as 20–50 percent of premiums.17
Whatever the merits of mandated benefits and first-dollar coverage of preventive services, the bottom line is that mandates will siphon premium revenue away from competing priorities. Although preventive services are an important part of comprehensive medical care, they are also costly. Dozens of separate studies have shown that prevention usually adds to medical costs instead of reducing them. Generally, about 80 percent of preventive services add more to medical costs than they save.18 Private plans forced to take on the full costs of these services will compensate by not covering services that do not get an A or B grade.19
"Far from increasing the number of preventive tests and treatments that health plans pay for, the new mandate may have the reverse effect of reducing the number of covered services."
Though the A- and B-rated services will get full coverage under PPACA, a lot of other services that do not make those grades but are currently covered (often with co-pays) may be nixed by health plans entirely. Likewise, if the USPSTF has not formally reviewed a particular preventive service, then Medicare will not need to cover it. The USPSTF chooses to review only a relatively small fraction of the preventive services available to patients, putting it fully in control of what preventive services are likely to get paid for. Far from increasing the number of preventive tests and treatments that health plans pay for, the new mandate may have the reverse effect of reducing the number of covered services.
Widespread Procedural Shortcomings
The significant and unexpected new authority conferred on the USPSTF is all the more troubling because of shortcomings in the way it operates and its limited number of expert staff. These features leave it ill equipped to discharge an increasingly complex mission. Though the task force was created as an advisory body, its new mandates require it to exercise many of the same procedures and coverage authorities as an agency like Medicare. Yet it has put in place few of the procedures routine in similar regulatory agencies to ensure transparency in its deliberations, due process for stakeholders, and mechanisms to solicit and consider input from the broader community.
For one thing, the task force maintains a largely insular process in comparison to similar regulatory functions that exist in sister public health agencies. Moreover, the deliberations and meetings of the USPSTF are not subject to the provisions of the Federal Advisory Committee Act (FACA), which means, among other things, that its proceedings are not required to be made public. The USPSTF is also not subject to the Administrative Procedures Act (APA), which governs the way in which administrative agencies of the federal government may propose and establish regulations. The act imposes requirements for transparency, interagency review, and ample opportunity for notice and comment. The APA also sets up a process for federal court review of agency actions.
The USPSTF was long able to operate without meeting these customary expectations because the body was largely advisory and often ignored. The task force would often say that it was just an advisory council with nonbinding recommendations. That argument clearly does not hold true anymore. There is a credible case that the USPSTF is now acting as an agency and its recommendations constitute final agency actions. The USPSTF is convened by AHRQ but functions as an independent, external advisory body. Until recently, AHRQ did not even publish the basis for USPSTF findings or its evidence review as a draft for public comment before it completed its recommendations. While the task force now voluntarily issues its draft recommendations for a brief period of public comment, it has no formal requirement to do so. The organization has taken admirable steps to improve its transparency and outreach in recent years, but these efforts are voluntary and still fall short of the expectations that bind other regulatory agencies.
Moreover, the task force has little capacity to vet and incorporate public comments made during this process, nor are there clear ways for people to appeal decisions. The only mechanism is for an affected party to seek a political waiver from the Secretary of Health and Human Services. The provision for this last-ditch waiver process was slipped into PPACA as a compromise to address the backlash created by the USPSTF breast cancer screening decision.
The procedural shortcomings in how the USPSTF operates are amplified by the fact that it could become the first federal authority to explicitly make cost effectiveness a part of its criteria for covering health care services. Cost-effectiveness analysis is not a central part of the USPSTF’s mandate. Indeed, the members of the task force do not have expertise in this discipline, but it has been gradually worked into some of its analyses. Since the way the USPSTF goes about its work is not governed by many explicit regulations or instructions from Congress, the task force is able to unilaterally adapt its approach to meet political trends.
In 2001, the USPSTF announced it would start conducting systematic reviews of cost-effectiveness analysis to inform its recommendations. To this end, the USPSTF “initiated a process for systematically reviewing cost-effectiveness analyses as an aid in making recommendations about clinical preventive services.”20 The USPSTF stated on its website that one of its goals in using this analysis in its recommendations is to “provide substrata for policy discussions and public debate over the role of cost-effectiveness in allocating health care resources.”21
However, there are some unique problems with using cost as a basis to vet screening tests. One is that many of the economic benefits of screening tests cannot be easily measured by tabulating the direct savings from these technologies. For example, cost-benefit analyses cannot easily measure the value afforded by intangible benefits such as greater assuredness that patients gain from a negative test result. Embedding cost in the evaluation of coverage for screening tests can also create long delays in deploying new tools or adapting medical care to new discoveries. Moreover, uncovering illness early will sometimes prove more costly than letting it fester, even if early detection makes a particular disease more curable.
All of these shortcomings are complicated still more by the byzantine manner in which the USPSTF gathers the evidence to support its coverage decisions. The task force members review evidence that is independently collected for them by AHRQ staff. But that evidence collection process is largely subcontracted by the AHRQ to outside groups (mostly academic research teams). The AHRQ takes charge of compiling the resulting information and passing it on to the task force, but typically has little hand in generating the data. (A recent exception was the breast cancer screening decision, for which AHRQ went to great lengths to consider emerging data.) This process means that the USPSTF often has limited proximity to the origin of the data and reduced ability to actively solicit new information.
The sixteen advisors appointed to serve on the USPSTF are mostly primary care physicians, generally experts in preventive and public health. However, as clinical generalists, they rarely have deep expertise in the discrete medical disciplines in which they are asked to pass judgment, such as oncology or infectious diseases. While the USPSTF is obligated to solicit input from other clinical experts when evaluating a particular preventive service, it has no process to formally engage these experts.
The USPSTF also has nothing similar to the advisory committee process maintained by the Food and Drug Administration (FDA) or even the less formal practices used by the CMS. Proponents celebrate this insular approach, arguing that it leads to more objective decisions free from intrusion. But as we have seen in the last year with the Obama administration’s efforts to obviate controversial USPSTF decisions, the political process easily pushes around the task force. All that the insular process guarantees is that USPSTF recommendations can be out of sync with conventional medical practice and even sister health agencies.
Another problem is that the USPSTF’s criteria consistently undervalue the benefits of tests and treatments aimed at prevention, especially services aimed at secondary prevention. The USPSTF has generally failed to recognize the benefits of services used to prevent complications in older patients with established diseases (for example, coronary artery disease).22 Additionally, it has an institutional preference for issuing I ratings.
Many of the preventive services subsequently rejected by the USPSTF are officially recognized as beneficial by competing public health authorities.23 At times, this means that the USPSTF finds its negative recommendations opposed by decisions made by sister public health agencies both in the United States and abroad. In addition to the previously mentioned breast cancer screening decision, some of these conflicts have involved screening tests for HIV, prostate cancer, and hepatitis C.24
An additional problem is that the task force has been slow to incorporate new science into its recommendations. For example, in 2010, the USPSTF finally recommended aspirin for the prevention of stroke and heart attack for those at risk, decades after this practice was demonstrated to save lives and had become standard practice.25 This is going to have broad implications once the USPSTF is established as the standard for coverage decisions made by the private health plans under PPACA.
Finally, the way that the USPSTF evaluates preventive services does not adequately account for innovations in technology and medical care delivery. The delay between the establishment of new science and its incorporation into USPSTF guidelines is a function of not just the agency’s deliberative process, but its institutional design. It can take a few years for the USPSTF to issue a recommendation after it commits to a particular review and even longer for it to reconsider its prior decisions in the face of new evidence that leaves its recommendations obsolete. Therefore, the USPSTF’s authority over setting the standards for coverage of preventive services is likely to delay the incorporation of new treatment approaches into reimbursement policies.
This is compounded by the fact that evidence that the USPSTF considers is collected only periodically. There is no continual data collection or regular monitoring of evolving evidence and clinical practice trends. This sort of monitoring of current clinical standards is required at agencies like the FDA and CMS. In some cases, the USPSTF will take as many as five years to reconsider its prior recommendations. As a result of these shortcomings, the USPSTF’s recommendations can significantly lag behind the state of practice. This tortuous process disconnects the USPSTF’s findings from the current scientific evidence and the state of medical practice.
Righting a Wrong
"The least we can expect is that the USPSTF be required to operate in a way that is rigorous, transparent, and inclusive."
In an optimal world, the USPSTF would not set decisions that bind much of the public and private market for health coverage, but PPACA has already set these steps in motion. Short of opening up that legislation, the least we can expect is that the USPSTF be required to operate in a way that is rigorous, transparent, and inclusive. Congress should take the following steps to bring greater accountability and precision to this process.
First, Congress should closely monitor how the new mandates that the USPSTF imposes on health plans begin to impact coverage, access, and medical practice. There is good reason to believe that once plans are forced to cover all of the costs of the USPSTF’s A- and B-rated recommendations, these same health plans will offset those new costs by curtailing coverage for many other preventive services, even those that might be more highly valued by patients and clinicians.
Health plans will not be able to both comply with all of the USPSTF mandates (which will require first-dollar coverage for many services that presently require co-pays) and continue to offer coverage for those services that do not meet the USPSTF’s grade. Doing both will be too expensive. So the USPSTF A- and B-graded services will become both a floor and ceiling on what gets covered.
Congress also should make sure that recommendations issued by the USPSTF are in sync with sister public health agencies that have far more expertise in the domains in which they operate. These include the CDC, the National Institutes of Health, and the FDA. The USPSTF lacks the capacity of these other agencies, and as such, its analysis should not supersede their expert opinions. Congress invested these bodies with far more resources and expertise to make these judgments, and the USPSTF should not be able to displace their work.
Moreover, at the very least, the USPSTF should be subject to the Administrative Procedures Act. It is no longer functioning as an academic advisory body but instead is a full-fledged federal health agency making cost-based decisions on access to medical care that will bind the entire private marketplace. Therefore, it should be subject to all the rules that are attached to agencies that exercise these sorts of sweeping authorities.
Finally, Congress should bar the USPSTF from using cost as one criterion in establishing recommendations on preventive services. By its own admission, it does not have the requisite expertise, capacities, or regulatory traditions to exercise this authority. Its approach to making coverage decisions is opaque and insular. The decision to balance considerations of cost against clinical benefit must be made with great care. The USPSTF should not be wielding this kind of questionable authority.
The USPSTF has evolved from an expert commission to an advisory body to an independent body with all of the authority of a regulatory agency. Along the way, it has developed few of the characteristics shared by regulatory bodies. While the USPSTF has taken steps to bring more structure and transparency to its process in recent years, it still does not meet the expectations placed on sister agencies that discharge similar regulatory power. Historically, the USPSTF saw its purpose as providing users with information about the extent to which its recommendations are supported by evidence, allowing them to make more informed decisions about implementation. Now its recommendations will have regulatory force that will effectively bind clinicians by determining what their patients can be reimbursed for.
At the very least (given its expansive new authority) Congress should view the USPSTF as the regulatory authority that it has become and, in turn, subject the body to the APA. Or, less appropriately, Congress could view USPSTF as an advisory committee to the government and subject the body to FACA. But given its expanding mandate, how can the USPSTF continue to be treated as a body that is neither advisory nor regulatory, and exempted from all of the customary rules that govern other federal entities?
Under PPACA, a body that was once empowered only to make preventive health recommendations now has been delegated authority to create coverage requirements for private health plans. To those who feared that considerations of cost and the determinations of centralized processes could drive decision making under PPACA, the USPSTF may become a visible manifestation of these concerns. Proponents of this sort of centralized decision making may have done their policy prerogatives significant harm by allowing a group with so little procedural rigor to represent the leading edge of these kinds of prescriptions.
Scott Gottlieb, M.D., is a resident fellow at AEI.
1. For the complete summary of the USPSTF’s decision on breast cancer screening, see US Preventive Services Task Force, “Screening for Breast Cancer: Recommendation Statement,” November 2009, www.uspreventiveservicestaskforce.org/uspstf09/breastcancer/brcanrs.htm (accessed October 27, 2011).
2. PPACA also mandates coverage for other preventive recommendations, including those issued by the Advisory Committee on Immunization Practices that have been adopted by the director of the Centers for Disease Control and Prevention, the comprehensive guidelines supported by the Health Resources and Services Administration (HRSA), and HRSA’s list of recommended women’s preventive services. Note that the requirement applies only to plans created after March 23, 2010. However, plans created before that date that lose their grandfathered status will be required to cover these preventive services. See Amanda Cassidy, “Health Policy Brief: Preventive Services without Cost Sharing,” Health Affairs, December 28, 2010, www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=37 (accessed October 27, 2011).
3. Zosia Chustecka, “Recommendation against Routine PSA Screening in US,” Medscape News, October 7, 2011, www.medscape.com/viewarticle/751159 (accessed October 31, 2011).
4. Alina Selyukh, “US Health Panel Cautious on HPV Screening vs Pap,” Reuters, October 19, 2011.
5. US Preventive Services Task Force, First Annual Report to Congress on High-Priority Evidence Gaps for Clinical Preventive Services (Washington, DC: US Preventive Services Task Force, October 2011).
6. Centers for Medicare and Medicaid Services, “More People with Medicare Receiving Free Preventive Care,” news release, June 20, 2011, www.cms.gov/apps/media/press/release.asp?Counter=3987 (accessed October 28, 2011). For details on the mandated preventive services, see US Department of Health and Human Services, “Preventive Care,” n.d., www.healthcare.gov/law/features/rights/preventive-care/index.html (accessed October 28, 2011); see also Alina Selyukh, “U.S. Says Insurers Must Fully Cover Birth Control,” Reuters, August 1, 2011.
7. US Preventive Services Task Force, “Screening for Breast Cancer: US Preventive Services Task Force Recommendation Statement,” Annals of Internal Medicine 151, no. 10 (2009): 716–26.
8. Joseph Brownstein and Dan Childs, “Doctors Sound Off on New Mammogram Recommendations,” ABC News, November 18, 2009; Mark A. Helvie et al., “USPSTF Erroneously Understated Life-Years-Gained Benefit of Mammographic Screening of Women in Their 40s,” Radiology 258, no. 3 (2011): 958–59.
9. R. Edward Hendrick and Mark A. Helvie, “United States Preventive Services Task Force Screening Mammography Recommendations: Science Ignored,” American Journal of Roentgenology 196 (2011): W112–16. In their analysis, the two researchers found that having annual mammograms from age forty saved 64,889 more lives, with the current 65 percent compliance rate.
10. See, among other entries, this blog post by a former USPSTF member entitled “Mammograms and Death Panels: Why the Preventive Services Task Force Keeps Pulling Its Punches,” DrPullen.com: A Medical Blog for the Informed Patient, August 22, 2011, http://drpullen.com/uspstf (accessed October 28, 2011).
11. Office of Disease Prevention and Health Promotion, “US Preventive Services Task Force,” n.d., www.odphp.osophs.dhhs.gov/pubs/guidecps/uspstf.htm (accessed October 28, 2011).
12. Diana B. Petitti et al., “Update on the Methods of the U.S. Preventive Services Task Force: Insufficient Evidence,” Annals of Internal Medicine 150, no. 3 (2009): 199–205.
13. The Medicare Improvements for Patients and Providers Act of 2008, Public Law No: 110-275, 110th Congress, July 15, 2008. Full text of the legislation available at www.govtrack.us/congress/bill.xpd?bill=h110-6331.
14. Under its new authority, the CMS has added services such as HIV screenings as a Medicare-covered benefit.
15. Social Security Act, 42 U.S.C. §1862, 1965. Full text of the legislation available at www.ssa.gov/OP_Home/ssact/title18/1862.htm.
16. For a complete list of recommended preventive services by the USPSTF, see US Department of Health and Human Services, “Preventive Service Recommendations,” n.d., www.ahrq.gov/clinic/uspstfix.htm (accessed October 28, 2011).
17. Jonathan Gruber, "State-Mandated Benefits and Employer-Provided Health Insurance," Journal of Public Economics 55, no. 3 (November 1994): 433–64; Lawrence H. Summers, "Some Simple Economics of Mandated Benefits," American Economic Review 79, no. 2 (May 1989): 177–83; John R. Graham, From Heart Transplants to Hairpieces: The Questionable Benefits of State Benefit Mandates for Health Insurance (San Francisco: Pacific Research Institute, July 2008), www.pacificresearch.org/docLib/20080630_Heart_to_Hair.pdf (accessed October 28, 2011).
18. Louise Russell, “Preventing Chronic Disease: An Important Investment, but Don’t Count on Cost Savings,” Health Affairs 28, no. 1 (January 2009): 42–45.
19. On July 19, 2010, the Departments of Health and Human Services, Labor, and Treasury jointly released “Interim Final Rules for Group Health Plans and Health Insurance Issuers Relating to Coverage of Preventive Services under the Patient Protection and Affordable Care Act,” Federal Register 75, no. 137 (July 19, 2010). Among other things, the rule requires group health plans and health insurers to cover certain preventive health services and to eliminate cost-sharing requirements for such services. The rule does not apply to grandfathered plans.
20. Somnath Saha et al., “The Art and Science of Incorporating Cost Effectiveness into Evidence-Based Recommendations for Clinical Preventive Services,” American Journal of Preventive Medicine 20, no. 3, supp. 1 (April 2001): 36–43.
22. Russell P. Harris et al., “Current Methods of the U.S. Preventive Services Task Force: A Review of the Process,” American Journal of Preventive Medicine 20, no. 3S (2001): 21–35.
23. S. J. Zyzanski et al., “Family Physicians’ Disagreements with the US Preventive Services Task Force Recommendations,” Journal of Family Practice 39, no. 2 (1994): 140–47.
24. Jonathan E. Rodnick, “The CDC and USPSTF Recommendations for HIV Testing,” American Family Physician 76, no. 10 (2007): 1456, 1459.
25. Doug Campos-Outcalt, "USPSTF Recommendations You May Have Missed amid the Breast Cancer Controversy," Journal of Family Practice 59, no. 5 (2010): 276–80.
Published in the Anecdotes department of IEEE Annals of the History of Computing, Vol. 34, Number 1, January–March 2012, pp. 4–6, DOI 10.1109/MAHC.2012.6. My profound thanks to David Walden, Anecdotes Editor.
Applications from prior technologies are often reimplemented on new computing or communications platforms, and users sometimes don't realize that the applications have been recycled. For example, text messaging was available between computer users for years before its implementation on cell phones, and email has precursors before its implementation on the Internet.
My colleague Noel Morris and I implemented both an electronic mail command and a text messaging facility for the Massachusetts Institute of Technology's Compatible Time-Sharing System (CTSS) in 1965. The MAIL command let a system user send a text message to another user's "mail box" so the recipient could read the message later. The WRITE subcommand of the . SAVED command allowed a user to send a one-line message to another logged-in user's terminal.
Begun at the MIT Computation Center in 1961, CTSS was fairly operational the following year. By 1965, there were hundreds of registered users from MIT and other New England colleges, and CTSS provided service to up to 30 simultaneous users every day on each of the two systems on which CTSS ran -- the MIT Computation Center and Project MAC IBM 7094s. CTSS users logged into the 7094 from remote dial-up terminals and were able to store files on a disk online. This new ability encouraged users to share information in new ways.
When geographically separated CTSS users wanted to pass messages to each other, they sometimes created files with names such as TO TOM and put them in "common file" directories (which today we call folders). Recipients could log into CTSS later from any terminal, look for the files addressed to them, and print the files on the remote terminal. This method only worked between pairs of users who shared a common file directory. It relied on an ad hoc convention and had obvious privacy problems.
A more general message facility, the MAIL command, was proposed for CTSS in MIT Programming Staff Note 39, "Minimum System Documentation" by Louis Pouzin, Glenda Schroeder, and Pat Crisman. The memo has no date, but numerical sequence places it in either December 1964 or January 1965. PSN 39 proposed a facility that would let any CTSS user send text messages to any other. Each user's messages would be appended to a per-user file called MAIL BOX, which would have a "private" mode so that only the owner could read or delete messages. The proposed uses of MAIL were communication from "the system" to users informing them that files had been backed up, communication to the authors of CTSS commands with criticisms, and communication from command authors to the CTSS manual editor.
In the spring of 1965, Noel Morris and I were new members of the MIT research staff, working for the Political Science Department. When we read the PSN document about the proposed CTSS MAIL command, we asked, "Where is it?" We were told there was nobody available to write it. We wrote the MAIL command that summer. Noel saw how to use the features of the new CTSS file system to write messages into a user's mailbox file, and I wrote the code that interfaced with the user. We made a few changes from the original PSN 39 proposal during implementation. For example, to read their mailboxes, users used the PRINT command instead of a special argument to MAIL. (The CTSS manual write-up and the source code of MAIL are available online.[4,5]) Each message in a MAIL BOX was preceded by a single line showing the sending user's identification. The MAIL command was installed in Fall 1965. It did not support a message subject; carbon copies; sending to a list; fonts, color, or graphics in messages; or other improvements that became available in later mail mechanisms. Messages could only be sent to other users of the same time-sharing machine.
Our implementation of text messaging started in the spring of 1965. It was a feature of . SAVED, the command-shell abbreviation program, which read lines from the terminal and executed them. It could expand abbreviations in its input and iterate over lists of parameters, similar to the proposed Multics shell. It was a power user's tool, favored by programmers who frequently used CTSS. Our idea in building the program was that we could enhance the current CTSS with some of the desirable features of Multics, the next system to come.
I do not remember the date when Noel and I added the WRITE feature to . SAVED -- I think it was spring or summer of 1965. WRITE let a user send a single-line message to another logged-in user of . SAVED, employing buffer-sharing code that had been added to the CTSS supervisor the year before but never used. Users could enter lines with up to 120 characters, which were printed on the receiver's terminal when that user's session resumed . SAVED. Because both the sender and receiver had to be using the abbreviation program, this facility was mostly used by CTSS power users, such as the system programming group.
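The WRITE behavior just described — queue a one-line message of at most 120 characters, then print it when the recipient's session returns to command level — can likewise be sketched as a modern, purely hypothetical analogue of the supervisor's shared buffers. Everything here (names, the queue structure) is invented for illustration, not taken from the CTSS sources.

```python
from collections import defaultdict

MAX_CHARS = 120  # the line limit the article mentions

# Per-user pending-message queues, standing in for the supervisor's
# buffer-sharing mechanism that WRITE employed.
_pending = defaultdict(list)

def write_line(sender, recipient, line):
    """Queue a one-line message for another user (cf. the WRITE feature)."""
    if len(line) > MAX_CHARS:
        raise ValueError("line exceeds 120 characters")
    _pending[recipient].append(f"{sender}: {line}")

def resume_session(user):
    """Return and clear any queued lines, as if printing them on the
    recipient's terminal when the session resumes command level."""
    lines, _pending[user] = _pending[user], []
    return lines
```

The deferred delivery is the point: unlike mail, nothing is stored durably, so a message reaches its recipient only if both parties are running the same program — which is why, as noted above, the facility stayed with CTSS power users.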
We submitted these programs as user contributions to CTSS, and they gained wide acceptance. Both commands were available to all CTSS users and documented in the CTSS Programmer's Guide.[4,6]
By the beginning of the 1970s, there were more than 1,000 users of MIT's CTSS system using the system by dial-up from the MIT campus and from other, mostly academic locations. They used MAIL and WRITE to coordinate work and share information on all kinds of topics, including personal topics, just as now. None of the originally proposed three purposes of MAIL ever saw significant use. CTSS was eventually shut down on 20 July 1973.
The early 1970s were also a time of campus unrest and antiwar sentiment, which led to an early instance of spam. In 1971, a member of the system development group decided it was important to share his personal feelings with a large community of unwilling readers. Noel and I had foreseen the possibility of inappropriate mass mailings in the original MAIL command and included code to prevent it. However, then, as now, antispam measures were not always effective.
The MAIL and WRITE commands were based on analogies with older communications systems; telegraph and teletype communication had been used for many years to transfer information electrically from one person to another, and postal communication was even older.
Both commands made use of underlying CTSS features to provide their functions. MAIL used the CTSS file system and its privilege and locking mechanisms. WRITE took advantage of a message-buffering system previously added to CTSS by Robert R. Fenichel.
The interface and function of MAIL and WRITE was not based on the features of any previous systems or programs. There might have been other mail and messaging programs on other time-sharing systems, but Noel and I were unaware of them.
CTSS never moved beyond intracomputer mail and messaging because no network connections existed from either CTSS system to other computers.
When Multics development moved from CTSS to Multics, I wrote a quick and dirty "mail" command for Multics, modeled on the CTSS MAIL command. Undergraduate Bob Frankston contributed a Multics version of text messaging. Both facilities were based on shared files and were later replaced by implementations that used Multics ring protection.
The MIT Project MAC Multics machine was connected to the ARPANet in 1971. Extending mail on a single system to mail across the network was a development effort started in the early 1970s that continued into the 1990s. Early ARPANet design documents describe mail transmission between systems as a part of the file transfer protocol (FTP) subsystem.[10,11]
Email and messaging have continued to evolve since 1965, and likely most users of electronic communication today have no idea how many incremental development steps preceded their current tools.
Tom Van Vleck is an independent consultant, working in Web applications, security, cloud computing, data mining, and programming. From 1965 to 1974, he was a manager and system programmer at the Massachusetts Institute of Technology, where he participated in Project MAC development.
Copyright (c) 2012 by IEEE, posted on author's web page by permission. | 1 | 5 |
The facts are still staggering, as Kathleen Thompson and Hilary Mac Austin remind us in their new book, Children of the Depression. In 1933, they write, "34 million men women and children were entirely without income. That was 28 percent of the American people then." At this same time, they continue, "a quarter of a million children were homeless . . . . at least one in five were hungry and without adequate clothing . . . [and] In some regions, especially coal-mining regions, as many as 90 percent of the children were malnourished."
Thompson's and Mac Austin's book is a testament to what children suffered during this crisis in American cultural life, when the bottom fell out, especially for the poorest in our society. Thompson and Mac Austin have brought this powerful and moving chapter of the history of American childhood back to us through the dozens of photographs that they have collected here from the enormous archives of the Farm Security Administration, the FSA. During the 1930s, the FSA actively documented the plight of children as well as their parents, in rural and small-town America, through the photographs and field notes of a corps of remarkable photographers, including Marion Post Wolcott, Lewis Hine, John Vachon, and Dorothea Lange. Their heart-breaking photographs captured both the physical and emotional states of these beleaguered children and their deplorable living and working conditions. The 1930s brought us our first child labor laws, which were finally passed not because of an overwhelming desire to protect children from the harshness and brutality of the workplace but, rather, to keep children from competing with adults for scarce menial jobs. And yet, in these photos, the children continued to work: dragging cotton sacks through the dusty fields, trying to drive a plow through an eroded pasture, harvesting cauliflower and peas, hawking newspapers in the rain. Here is where those myths about hardship were in fact born, the ones that have been passed down from the grandparents and great-grandparents, who really did walk to school barefoot and who really wrote the First Lady, Eleanor Roosevelt, to ask her for any old soiled dresses that might fit a seventh grader who had to stay out of school because she had nothing but rags to wear. These are unforgettable and necessary photographs; they remind us just how fragile and how brave the smallest among us always are.
Copyright © 2002 by John Cech
Thomas Sowell, prolific public intellectual and the Rose and Milton Friedman Senior Fellow at the Hoover Institution, is one of America's greatest economic thinkers and educators. He's taught the fundamentals through such books as Economic Facts and Fallacies and Basic Economics and chronicled economic history through such scholarly works as Marxism: Philosophy and Economics and On Classical Economics. In his classic work Knowledge and Decisions, he espoused a sophisticated, largely Hayekian approach, revealing how the efficient spread of relevant knowledge is shaped by our social institutions, and often warped and misshapen by government.
Now, in The Housing Boom and Bust (Basic Books), Sowell contemplates the greatest expansion of government power in a generation, which was itself occasioned by the greatest economic crisis in as long. A quick but thorough guide to the causes of the crises, Sowell's book shows how government policies led to a huge increase in highly risky housing loans. As he notes, the immense local variability in housing prices and failed loans reveals that the government mistook a set of local problems for a national one, and then imposed a single troublesome national solution. Sowell argues that while foolish decisions to indulge in complicated investment vehicles affected the specifics of how the financial contagion spread, at its root the housing problem is one of bad mortgages. And those came from bad decisions by government and by borrowers themselves.
Senior Editor Brian Doherty interviewed Sowell earlier this week about the book, the crisis, and the government’s unfortunate response.
reason: Is the economic downturn caused by the housing boom and bust the worst economic circumstance of your lifetime?
Thomas Sowell: Since I was born in 1930 the economic crisis with the most impact of my lifetime was the Great Depression. As to whether this will match that, it’s too early to tell. Right now it certainly is nothing comparable to the Great Depression, but the Great Depression began as nothing comparable to the Great Depression. For the first 12 months after the stock market crash [of 1929], unemployment never reached double digits but the solution turned out to create more disasters than the problem they were trying to solve.
Whether that will happen again depends on how far and how long the current administration will push policies to solve the present crisis and what their repercussions will be. As mentioned in the book, parallel to the 1929 crash was the stock market crash of 1987. That had the potential to create another Great Depression had Reagan followed similar policies as Hoover and FDR did. He didn’t, so we just about forgot about the stock market crash of ’87.
reason: Do you see or anticipate Obama’s reactions being sufficient to turn this downturn into another lengthy depression?
Sowell: I hope not, but what we’ve seen in these past few months is an exercise in unprecedented powers. I mean, to fire the chairman of General Motors, to tell credit card companies how they should run their business, tell GM what kind of cars it should be making, and there’s no sign of an end in sight yet. Obama’s policies are a work in progress. So a lot depends on how far he will push, but I see no signs of him turning back. I see no substantial resistance in Congress. But you never know, as things start to unfold voices of sanity may prevail.
reason: What is the most dangerous sign you’ve seen so far in terms of policy reaction to the housing bust?
Sowell: The presumption that Obama knows how all these industries ought to be operating better than people who have spent lives in those industries, and a general cockiness going back till before he was president, and the fact that he has no experience whatever in managing anything. Only someone who has never had the responsibility for managing anything could believe he could manage just about everything.
reason: You parcel out some share of responsibility for the specific way the housing bust broke down to borrowers, lenders, financial markets, and the government. What was the borrowers' share?
Sowell: There are those who borrowed to buy a place to live and speculators who borrowed to speculate, and did enormously well for a number of years. Then there were people who simply don’t understand complex mortgages, particularly people who never owned a home before and whose educations were limited. But the people I would blame the most in the sense that without their interference other problems would have been within manageable means are the politicians—people in Congress and the president and regulators—who pushed the lenders and the banks and Fannie Mae and Freddie Mac into lending and buying mortgages based on people who didn’t meet standards that evolved in the marketplace and which had worked. Those politicians, in addition to that initial mistake, ignored all sorts of warnings from all sorts of sources. As I list in the book, the Economist in London, Fortune, Barron’s, people at the American Enterprise Institute, all over the map, saw that this policy of encouraging homeownership at all costs was leading to trouble.
But the politicians clearly had as their political goal homeownership as “a good thing” and persisted—and for that matter persist to this moment in pushing it. The Federal Housing Administration last I checked was promoting supporting mortgages that have less than 4 percent down payment. We all make mistakes, but politicians have persisted in their mistakes, and in the pointing of fingers in other directions.
“Affordable housing” covers a number of things. There was this sense in Washington that the cost of buying a house had become a nationwide major problem which would require a federal answer as opposed to a local answer. All the data say that was not true. People weren’t paying a higher percent of their income nationwide for housing than they had a decade earlier. In fact, it was a somewhat lower percentage in some areas. Now in some areas, including California—coastal California—people were paying half their family income to put a roof over their head. That in turn was a result of local political people putting all sorts of restrictions on building.
Implicit in the idea of “affordable housing” is the notion that third parties know what people can afford better than those people know themselves. If you spell it out it sounds so absurd you wonder how anyone could have believed it. But for politicians the question is not, is it absurd? The question is whether or not the public will buy it.
reason: How much weight do you place on the notion that Federal Reserve expansionary money and credit policies primed the bubble, and bust, in housing?
Sowell: I find it hard to accept. I’m sure if the interest rates had been at 8 percent the boom would not have gone as far and the bust would not have been as big. I’m not saying monetary policy had no effect. But I am struck by the fact that Federal Reserve policy is nationwide, and in places like Dallas the increase in housing prices was in single digits and the decrease has been in single digits. So while Fed policy undoubtedly aggravated circumstances, it can’t be the fundamental cause because the defaults were so heavily concentrated. 60 percent of all defaults nationwide were in five states, and I suspect if you broke down the data even more you’d find specific regions in those five states very heavily implicated in defaults.
reason: What do crisis like this, and public reaction, say about general public understanding of economics?
Sowell: I think in the U.S. and in most of the world the public understanding of economics is abysmal. But it’s one thing not to understand something. I don’t understand brain surgery. It’s another to want to form policies on things on which you are ignorant. I hear the wonderful phrase “I want to make a difference” when it comes to policy. I would be horrified if I wanted to make a difference in brain surgery. The only difference is more people would die on the operating table.
The only encouraging thing about public reaction to the crisis is that going by polls citizens seem to have more misgivings about some of these policies than politicians or the media. Still, though there have been studies that indicate the New Deal prolonged the Great Depression by years, what is also clear is it was enormously popular. FDR was elected four straight times, and more than once without ever having brought unemployment down to single digits. An economic disaster does not necessarily mean a political disaster. If we could raise the average level of understanding of economics to what Alfred Marshall had in 1890, the vast majority of politicians would be voted out of office.
reason: Do you think the policies we’ve seen Obama pursue so far threaten another Great Depression-level downturn?
Sowell: I would hope not, but I certainly do think very serious consequences are likely to follow from all this, and they aren’t really discussed much. The ease with which we are now throwing the word “trillion”—I remember when billion was a shock word. To talk about trillions as though they are nickels and dimes, it’s a classic example of doing something that sounds good at the moment whose repercussions are beyond the horizon. When bad effects of his policies come, will people connect the dots? Or will Obama be able to get away with it like FDR did, blaming it all on his predecessor?
reason: What sort of reactions should the federal government have to the current situation?
Sowell: First, the government should not try to artificially keep up housing prices. The tremendous irony is that the very politicians who for years talked of affordable housing are fighting to keep housing prices from falling. How does housing become more affordable except by keeping prices down? They really have no interest in having housing become affordable by means other than their largesse.
reason: Do you think they need to be doing anything to ease the woes of people in foreclosure?
Sowell: Not at all. Foreclosure is not something that happens to you like being struck by lightning. Foreclosure is the end result of things people have done that they need to stop doing in the future. And the market can take care of that. California is one of those states where we’ve seen a drastic reduction in fancy no-money-down mortgages and all kinds of creative financing; we’ve seen those things drop sharply within just a couple of years as housing prices fell and foreclosures rose, as long as the government isn’t there to prop them up.
But though the market's reaction in California shows that borrowers and lenders can learn with market discipline, one group has not learned: politicians. Or rather they have learned a lesson, that they can get away scot free simply pointing fingers at others and making pious statements. I have no doubt Barney Frank will get reelected. But if people had any idea of the damage he’s done [by promoting the “affordable housing at all costs” policies] he’d be out of there. This stuff has happened before, though not on this scale. Republicans in the 1920s were pushing homeownership which led to increased foreclosures, and in the 1930s the Democrats did and that led to increased foreclosures. This is all a cycle, though we’re in the worst of the cycles. But politicians don’t stop doing this because they never pay any price.
And I think we see politicians today repeating one of the features of New Deal policies in that the policies seem not geared toward getting us out of our current problem as quickly as possible, but to use the problem to create enduring institutional changes to the very nature of the American economy. | 1 | 2 |
<urn:uuid:3e5b3730-0f85-41ca-a6d4-49120362da23> | With a crop of very light jets (VLJs) in development it's interesting to look back at another would be revolution in airplane design, the Lear Fan. In the late 1970s inventor and promoter Bill Lear conceived a turboprop airplane that would have twin engines driving a single propeller mounted on the tail. The airplane was made entirely from carbon graphite material which was expected to give it an unprecedented light empty weight. Maximum cruise speed was projected to be 350 knots, faster than Cessna's new Mustang light jet.
The Lear Fan garnered a pot full of orders, several prototypes flew many hours in flight test, but in the end unsolvable problems with the gearbox that combined the output of the two turbine engines, and other issues with weight and aerodynamics, doomed the project. One difference between the Lear Fan and some of today's proposed small jets is the price, which was $1.6 million in 1981. That was a lot of money, but then the Lear Fan promised to do things never before possible. This story was written not long after the first prototype Lear Fan flew.

FLYING 1981 Article: AIRCRAFT DESIGN: LEAR FAN BITES INTO THE BUSINESS FLEET
By J. Mac McClellan
AVIATION'S NEWEST AIRPLANE is reaching out to stir up our oldest feelings. The Lear Fan 2100 has rekindled emotions not generated by big companies with engineering groups carrying out the direction of large corporate management staffs. The Lear Fan takes us back to the days when individuals put their names on airplanes and then went out to see how well they would fly and if anyone would buy them.
LearAvia headquarters in Reno, Nevada is filled with pictures of and sayings by the late William P. Lear, the inventor-designer-promoter who shaped the Lear Fan. A favorite saying, and one that best describes the Lear Fan program is, "Don't take a nibble, take the big bite." That is exactly what the people at Lear Fan have done.
The "big bite" is an attempt to build the first commercially successful twin-engine, singleprop pusher airplane, and if that isn't challenge enough, the airplane will be built entirely from composite materials. Either the pusher design or the nonmetal structure would be enough to label the Lear Fan as revolutionary; together they can only be called radical.
Lear's wife, Moya, has directed the Lear Fan project since her husband's death in 1978. Among his final instructions, it is reported, he told her, "Finish it, Mommy, finish it." Customers have plunked down hard cash to reserve 180 delivery spots and the prototype is flying. The British Government has supplied $50 million in loans and grants to assure production of the Lear Fan in Northern Ireland. All told, $100 million is committed to the project, which puts it in the big leagues. But the Lear Fan, both by design and circumstance, will always be credited to one man.
Bill Lear first advocated a twin-engine, single-propeller pusher airplane in a story in the now-defunct Skyways Magazine in 1954. Lear contended that such a design would, because of reduced weight and drag, perform better and more safely, without the possibility of asymmetric thrust if an engine failed.
Lear kept his pusher design concept on the back burner as he presided over advances in avionics, car radios and the first business jet. During the 1970s, he directed his aviation design talents to a new business jet he called the Learstar 600. By 1977, Canadair had purchased the production rights to the airplane; it enlarged the cabin and began production of the airplane, now called the Challenger.
With the Learstar gone to Canada, Lear decided the time was right for his pusher. Applying his "big bite" theory, he concluded that the airplane could be transformed from revolutionary to spectacular through use of lightweight composite materials that were now being used for parts of various aircraft. The Lear Fan 2100 began to take shape in the inventor's mind and on paper.
Television (or TV) (from the Greek tele, meaning "far," and the Latin visio, meaning "sight") is a telecommunication system for broadcasting and receiving moving pictures and sound over long distances. The term has come to refer to all aspects of the system, from the receiver set to the programming and transmission.
With the growth and influence of the television industry, it is not surprising that TV sets rank among consumer goods purchased most often, and the number of sets sold per year is used as an economic indicator. Almost every household in the United States has at least one television set. The average household might have two or three, with a set even in the bathroom.
In the future, with prices going down and new technology emerging rapidly, it is possible that someday the liquid crystal display (LCD) or plasma television could completely make the cathode ray tube (CRT) receiver sets obsolete, in much the same way as compact discs (CDs) overtook vinyl records.
The television was not invented by a single person; rather, the advances of many scientists contributed to the ultimate all-electronic version of the invention. The origins of what would become today's television system can be traced back as far as the discovery of the photoconductivity of the element selenium by Willoughby Smith in 1873, followed by the work on the telectroscope and the invention of the scanning disk by Paul Nipkow in 1884. All practical television systems use the fundamental idea of scanning an image to produce a time series signal representation. That representation is then transmitted to a device to reverse the scanning process. The final device, the television (or TV set), relies on the human eye to integrate the result into a coherent image.
Electromechanical techniques were developed from the 1900s into the 1920s, progressing from the transmission of still photographs to live still duotone images to moving duotone or silhouette images, with each step increasing the sensitivity and speed of the scanning photoelectric cell. John Logie Baird gave the world's first public demonstration of a working television system that transmitted live moving images with tone graduation (grayscale) on January 26, 1926, at his laboratory in London, and built a complete experimental broadcast system around his technology. Baird further demonstrated the world's first color television transmission on July 3, 1928. Other prominent developers of mechanical television included Charles Francis Jenkins, who demonstrated a primitive television system in 1923, Frank Conrad who demonstrated a movie-film-to-television converter at Westinghouse in 1928, and Frank Gray and Herbert E. Ives at Bell Labs who demonstrated wired long-distance television in 1927 and two-way television in 1930.
Color television systems were invented and patented even before black-and-white television was working.
Completely electronic television systems relied on the inventions of Philo Taylor Farnsworth, Vladimir Zworykin and others to produce a system suitable for mass distribution of television programming. Farnsworth gave the world's first public demonstration of an all-electronic television system at the Franklin Institute in Philadelphia on August 25, 1934.
Regular broadcast programming occurred in the United States, the United Kingdom, Germany, France and the Soviet Union before World War II. The first regular television broadcasts with a modern level of definition (240 or more lines) were made in England in 1936, soon upgraded to the so-called "System A" with 405 lines.
Regular network broadcasting began in the United States in 1946, and television became common in American homes by the middle 1950s. While North American over-the-air broadcasting was originally free of direct marginal cost to the consumer (i.e., cost in excess of acquisition and upkeep of the hardware) and broadcasters were compensated primarily by receipt of advertising revenue, increasingly United States television consumers obtain their programming by subscription to cable television systems or direct-to-home satellite transmissions. In the United Kingdom, France, and most of the rest of Europe, on the other hand, operators of television equipment must pay an annual license fee, which is usually used to fund (wholly or partly) the appropriate national public service broadcasters (e.g. British Broadcasting Corporation, France Télévisions, etc.).
Elements of a television system
The elements of a simple television system are:
- An image source—this may be a camera for live pick-up of images or a flying spot scanner for transmission of films
- A sound source
- A transmitter, which modulates one or more television signals with both picture and sound information for transmission
- A receiver (television) which recovers the picture and sound signals from the television broadcast
- A display device, which turns the electrical signals into visible light
- A sound device, which turns electrical signals into sound waves to go along with the picture
Practical television systems include equipment for selecting different image sources, mixing images from several sources at once, insertion of pre-recorded video signals, synchronizing signals from many sources, and direct image generation by computer for such purposes as station identification. Transmission may be over the air from land-based transmitters, over metal or optical cables, or by radio from synchronous satellites. Digital systems may be inserted anywhere in the chain to provide better image transmission quality, reduction in transmission bandwidth, special effects, or security of transmission from reception by non-subscribers.
Thanks to advances in display technology, there are now several kinds of video displays used in modern TV sets:
- CRT (Cathode Ray Tube): The most common screens are direct-view CRTs for up to 40 inches (100 centimeters) (in 4:3) and 46 inches (115 centimeters) (in 16:9) diagonally. These are the least expensive and are a refined technology that can still provide the best value for overall picture quality. As they do not have a fixed native resolution, they are capable of displaying sources with a variety of different resolutions at the best possible image quality. The frame rate or refresh rate of a typical NTSC format CRT TV is 60 Hz, and for the PAL format, is 50 Hz. A typical NTSC broadcast signal's visible portion has an equivalent resolution of about 640 by 480 pixels. It actually could be slightly higher than that, but the Vertical Blanking Interval, or VBI, allows other signals to be carried along with the broadcast.
- Rear projection: Most very large screen TVs (up to over 100 inches (254 cm)) use projection technology. Three types of projection systems are used in projection TVs: CRT-based, LCD-based, and DLP (reflective micromirror chip) -based. Projection television has been commercially available since the 1970s, but at that time could not match the image sharpness of the CRT; current models are vastly improved, and offer a cost-effective large-screen display.
- A variation is a video projector, using similar technology, which projects onto a screen.
- Flat panel (LCD or plasma): Modern advances have brought flat panels to TV that use active matrix LCD or plasma display technology. Flat panel LCDs and plasma displays are as little as one inch thick and can be hung on a wall like a picture or put over a pedestal. Some models can also be used as computer monitors.
- LED technology has become one of the choices for outdoor video and stadium uses, since the advent of ultra high brightness LEDs and driver circuits. LEDs enable scalable ultra-large flat panel video displays that other existing technologies may never be able to match in performance.
Each has its pros and cons. Flat panel LCD displays can have narrow viewing angles and so may not suit a home environment. Rear projection screens do not perform well in natural daylight or well-lit rooms and thus are best suited to dark viewing areas. A complete rundown of the pros and cons of each display should be sought before purchasing a single television technology.
Terminology for televisions
Pixel resolution is the number of individual points, known as pixels, on a given screen. A typical resolution of 720 by 480 means that the television display has 720 pixels across and 480 pixels on the vertical axis. The higher the resolution on a given display, the sharper the image. Contrast ratio is a measurement of the range between the brightest and darkest points on the screen. The higher the contrast ratio, the better the picture looks in terms of richness, depth, and shadow detail.
The brightness of a picture measures how vibrant and striking the image appears. It is measured in candelas per square metre (cd/m²), a unit of luminance (light emitted per unit area of the screen).
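To make these definitions concrete, here is a minimal sketch of the underlying arithmetic (the function names are illustrative, not from any standard API):

```python
def pixel_count(width, height):
    """Total number of addressable points (pixels) on the screen."""
    return width * height

def contrast_ratio(white_luminance, black_luminance):
    """Ratio between the brightest and darkest points, both in cd/m2."""
    return white_luminance / black_luminance

# A 720x480 display has 345,600 pixels.
print(pixel_count(720, 480))        # 345600
# A panel peaking at 500 cd/m2 with 0.5 cd/m2 blacks is rated 1000:1.
print(contrast_ratio(500, 0.5))     # 1000.0
```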
There are various bands on which televisions operate depending upon the country. The VHF and UHF signals in bands III to V are generally used. Lower frequencies do not have enough bandwidth available for television. Although the BBC initially used Band I VHF at 45 MHz, this frequency is no longer in use for this purpose. Band II is used for FM radio transmissions. Higher frequencies behave more like light and do not penetrate buildings or travel around obstructions well enough to be used in a conventional broadcast TV system, so they are generally only used for satellite broadcasting, which uses frequencies around 10 GHz. TV systems in most countries relay the video as an AM (amplitude-modulation) signal and the sound as a FM (frequency-modulation) signal. An exception is France, where the sound is AM.
Aspect ratio refers to the ratio of the horizontal to vertical measurements of a television's picture. Mechanically scanned television as first demonstrated by John Logie Baird in 1926 used a 7:3 vertical aspect ratio, oriented for the head and shoulders of a single person in close-up.
Most of the early electronic TV systems from the mid-1930s onward shared the same aspect ratio of 4:3, which was chosen to match the Academy Ratio used in cinema films at the time. This ratio was also square enough to be conveniently viewed on round cathode-ray tubes (CRTs), which were all that could be produced given the manufacturing technology of the time (today's CRT technology allows the manufacture of much wider tubes, and the flat-screen technologies which are becoming steadily more popular have no technical aspect ratio limitations at all). The BBC's television service used a more squarish 5:4 ratio from 1936 to April 3, 1950, when it too switched to a 4:3 ratio. This did not present significant problems, as most sets at the time used round tubes which were easily adjusted to the 4:3 ratio when the transmissions changed.
In the 1950s, movie studios moved towards widescreen aspect ratios such as CinemaScope in an effort to distance their product from television. Although this was initially just a gimmick, widescreen is still the format of choice today and square aspect ratio movies are rare. Some people argue that widescreen is actually a disadvantage when showing objects that are tall instead of panoramic, others say that natural vision is more panoramic than tall, and therefore widescreen is easier on the eye.
The switch to digital television systems has been used as an opportunity to change the standard television picture format from the old ratio of 4:3 (1.33:1) to an aspect ratio of 16:9 (approximately 1.78:1). This enables TV to get closer to the aspect ratio of modern widescreen movies, which range from 1.66:1 through 1.85:1 to 2.35:1. There are two methods for transporting widescreen content, the better of which uses what is called anamorphic widescreen format. This format is very similar to the technique used to fit a widescreen movie frame inside a 1.33:1 35 millimeter film frame. The image is compressed horizontally when recorded, then expanded again when played back. The anamorphic widescreen 16:9 format was first introduced via European PALPlus television broadcasts and then later on "widescreen" DVDs; the ATSC HDTV system uses straight widescreen format, and no horizontal compression or expansion is used.
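The anamorphic trick can be expressed numerically: the display aspect ratio (DAR) equals the stored frame's own ratio (SAR) multiplied by the pixel aspect ratio (PAR), the factor by which each stored pixel is widened on playback. A small sketch of that relation (the function name is illustrative):

```python
from fractions import Fraction

def pixel_aspect_ratio(stored_w, stored_h, display_aspect):
    """PAR = DAR / SAR: how much wider than tall each stored pixel
    must be drawn so the frame plays back at the intended shape."""
    sar = Fraction(stored_w, stored_h)
    return display_aspect / sar

# A 720x480 frame flagged as anamorphic 16:9:
par = pixel_aspect_ratio(720, 480, Fraction(16, 9))
print(par)  # 32/27, i.e. each pixel is drawn about 1.185x wider than it is tall
```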
Recently "widescreen" has spread from television to computing where both desktop]] and laptop computers are commonly equipped with widescreen displays. There are some complaints about distortions of movie picture ratio due to some DVD playback software not taking account of aspect ratios, but this may subside as the DVD playback software matures. Furthermore, computer and laptop widescreen displays are in the 16:10 aspect ratio both physically in size and in pixel counts, and not in 16:9 of consumer televisions, leading to further complexity. This was a result of widescreen computer display engineers' uninformed assumption that people viewing 16:9 content on their computer would prefer that an area of the screen be reserved for playback controls, subtitles or their taskbar, as opposed to viewing content full-screen.
Aspect ratio incompatibility
The television industry's changing of aspect ratios is not without difficulties, and can present a considerable problem.
Displaying a widescreen aspect (rectangular) image on a conventional aspect (square or 4:3) display can be shown:
- in "letterbox" format, with black horizontal bars at the top and bottom
- with part of the image being cropped, usually the extreme left and right of the image being cut off (or in "pan and scan," parts selected by an operator)
- with the image horizontally compressed
A conventional aspect (square or 4:3) image on a widescreen aspect (rectangular with longer horizon) display can be shown:
- in “pillar box" format, with black vertical bars to the left and right
- with upper and lower portions of the image cut off (or in "tilt and scan," parts selected by an operator)
- with the image horizontally distorted
A common compromise is to shoot or create material at an aspect ratio of 14:9, and to lose some image at each side for 4:3 presentation, and some image at top and bottom for 16:9 presentation. In recent years, the cinematographic process known as Super 35 (championed by James Cameron) has been used to film a number of major movies such as Titanic, Legally Blonde, Austin Powers, and Crouching Tiger, Hidden Dragon. This process results in a camera-negative which can then be used to create both wide-screen theatrical prints, and standard full screen releases for television/VHS/DVD which avoid the need for either "letterboxing" or the severe loss of information caused by conventional "pan-and-scan" cropping.
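The letterbox and pillarbox cases above reduce to the same arithmetic: scale the source to the display's limiting dimension, then split the leftover space into two equal bars. A rough sketch (an assumed helper for illustration, not from any broadcast standard):

```python
def fit_with_bars(src_w, src_h, disp_w, disp_h):
    """Fit a source frame on a display without cropping or distortion.
    Returns (scaled_w, scaled_h, bar_thickness_per_side).
    A source wider than the display gets letterbox bars (top/bottom);
    a narrower source gets pillarbox bars (left/right)."""
    src_aspect = src_w / src_h
    disp_aspect = disp_w / disp_h
    if src_aspect > disp_aspect:                      # letterbox
        scaled_w = disp_w
        scaled_h = round(disp_w / src_aspect)
        return scaled_w, scaled_h, (disp_h - scaled_h) // 2
    else:                                             # pillarbox (or exact fit)
        scaled_h = disp_h
        scaled_w = round(disp_h * src_aspect)
        return scaled_w, scaled_h, (disp_w - scaled_w) // 2

# 16:9 material on a 640x480 (4:3) screen: 60-pixel bars top and bottom.
print(fit_with_bars(16, 9, 640, 480))     # (640, 360, 60)
# 4:3 material on a 1920x1080 (16:9) screen: 240-pixel bars left and right.
print(fit_with_bars(4, 3, 1920, 1080))    # (1440, 1080, 240)
```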
The sound provided by television was originally similar to that of monophonic radio. The first televisions sold to the public were small boxes that showed the image and were attached to a radio. One technique for sound is called a simulcast (simultaneous broadcast), where the sound is broadcast on radio while the video is broadcast on television. Some television stations use the FM band to broadcast their sound. As televisions became more advanced, built-in stereo became quite common. Many televisions today have stereo jacks so people can attach amplifiers to the television for better sound.
Today there are many television add-ons including video game consoles, VCRs, set-top boxes for cable television, satellite and DVB-T compliant digital television reception, DVD players, or digital video recorders (including personal video recorders, PVRs). The add-on market continues to grow as new technologies are developed.
In the early days of television, cabinets were finished in wood grain; the style largely disappeared in the 1980s, though it has since made a modern comeback.
Since their inception in the U.S. in 1940, TV commercials have become one of the most effective, most pervasive, and most popular methods of selling products of many sorts, especially consumer goods. Advertising rates in the U.S. are determined primarily by Nielsen Ratings.
Getting TV programming shown to the public can happen in many different ways. After production the next step is to market and deliver the product to whatever markets are open to using it. This typically happens on two levels:
- Original Run or First Run—a producer creates a program of one or multiple episodes and shows it on a station or network which has either paid for the production itself or to which a license has been granted by the producers to do the same.
- Syndication—this is the terminology rather broadly used to describe secondary programming usages (beyond original run). It includes secondary runs in the country of first issue, but also international usage which may or may not be managed by the originating producer. In many cases other companies, TV stations or individuals are engaged to do the syndication work, in other words to sell the product into the markets they are allowed to sell into by contract from the copyright holders, in most cases the producers.
In most countries, the first wave occurs primarily on free-to-air (FTA) television, while the second wave happens on subscription TV and in other countries. In the U.S., however, the first wave occurs on the FTA networks and subscription services, and the second wave travels via all means of distribution.
First-run programming is increasing on subscription services outside the U.S., but few domestically produced programs are syndicated on domestic FTA elsewhere. This practice is increasing however, generally on digital-only FTA channels, or with subscriber-only first run material appearing on FTA.
Unlike the U.S., repeat FTA screenings of a FTA network program almost only occur on that network. Also, affiliates rarely buy or produce non-network programming that isn't centered around local events.
Almost since the medium's inception there have been charges that some programming is, in one way or another, inappropriate, offensive, or indecent. Critics such as Jean Kilborne have claimed that television, as well as other mass media images, harm the self image of young girls. Other commentators, such as Sut Jhally, make the case that television advertising in the U.S. has been so effective that happiness has increasingly come to be equated with the purchase of products. George Gerbner has presented evidence that the frequent portrayals of crime, especially minority crime, has led to the Mean World Syndrome, the view among frequent viewers of television that crime rates are much higher than the actual data would indicate. In addition, a lot of television has been charged with presenting propaganda, political or otherwise, and being pitched at a low intellectual level. Paralleling television's growing primacy in family life and society, an increasingly vocal chorus of legislators, scientists, and parents is raising objections to the uncritical acceptance of the medium.
Results of research
Fifty years of research on the impact of television on children's emotional and social development demonstrate that there are clear and lasting effects of viewing violence. In a study published in February 2006, the research team demonstrated that the brain activation patterns of children viewing violence show that children are aroused by the violence (increased heart rates), demonstrate fear (activation of the amygdala, the "fight or flight" sensor in the brain) in response to the video violence, and store the observed violence in an area of the brain (the posterior cingulate) that is reserved for long-term memory of traumatic events.
A 2002 article in Scientific American suggested that compulsive television watching, television addiction, was no different from any other addiction, a finding backed up by reports of withdrawal symptoms among families forced by circumstance to cease watching.
A longitudinal study in New Zealand involving one thousand people (from childhood to 26 years of age) demonstrated that "television viewing in childhood and adolescence is associated with poor educational achievement by 26 years of age." In other words, the more the child watched television, the less likely he or she was to finish school and enroll in a university.
In Iceland, television broadcasting hours were restricted until 1984, with no television programs being broadcast on Thursday, or during the whole of July. Also, the Swedish government imposed a total ban on advertising to children under 12 in 1991.
Despite this research, some media scholars have dismissed such studies as flawed.
In its infancy, television was an ephemeral medium. Fans of regular shows planned their schedules so that they could be available to watch their shows at their time of broadcast. The term “appointment television” was coined by marketers to describe this kind of attachment.
The viewership's dependence on schedule lessened with the invention of programmable video recorders, such as the videocassette recorder and the digital video recorder. Consumers could watch programs on their own schedule once they were broadcast and recorded. Television service providers also offer video on demand, a set of programs which could be watched at any time.
Both mobile phone networks and the Internet are capable of carrying video streams. There is already a fair amount of Internet TV available, either live or as downloadable programs.
- Halper, Donna L. "How Television Came to Boston: The Forgotten Story of W1XAY." TVhistory.tv. Retrieved May 29, 2007.
- Layer, H. A. "Charles Francis Jenkins television station W3XK." Retrieved May 29, 2007.
- Baird, J. L. Television in 1934. Bairdtelevision.com. Retrieved May 29, 2007.
- Bleicher, Joan. Museum of Broadcast Communications: Germany. Retrieved May 29, 2007.
- 1936 German (Berlin) Olympics. TVhistory.tv. Retrieved May 29, 2007.
- Burns, R. W. Television: An International History of the Formative Years, p. 488. IET, 1998. ISBN 0852969147
- O'Neal, James. 2002. RCA's Russian Television Connection. Retrieved May 29, 2007.
- Pecora, Norma, John P. Murray and Ellen A. Wartella. Children and Television: 50 Years of Research. Lawrence Erlbaum Associates, 2006. ISBN 0805841393
- Media Psychology 8 (1): 25-37.
- Kubey, Robert and Mihaly Csikszentmihalyi. "Television Addiction is no Mere Metaphor." Scientific American (February 23, 2002). Retrieved May 29, 2007.
- Gauntlett, David. "Ten Things Wrong With the Media 'Effects' Model." University of Westminster. Retrieved May 29, 2007.
- Abramson, Albert. 2003. The History of Television, 1942 to 2000. Jefferson, NC: McFarland & Company. ISBN 0786412208
- Barnouw, Erik. 1975. Tube of Plenty: The Evolution of American Television. Second edition, 1990. New York: Oxford University Press. ISBN 0195064844
- Bourdieu, Pierre. 1999. On Television. New York: The New Press. ISBN 1565845129
- Brooks, Tim and Earle March. The Complete Guide to Prime Time Network and Cable TV Shows. Eighth edition, 2002. Ballantine. ISBN 0345455428
- Debord, Guy. The Society of the Spectacle. Zone Books, 1995. ISBN 0942299795
- Derrida, Jacques and Bernard Stiegler. 1996. Echographies of Television. English translation, 2002. Malden, MA: Blackwell Publishers, Inc. ISBN 074562037X
- Fisher David E. and Marshall J. Fisher. 1996. Tube: the Invention of Television. Washington DC: Counterpoint. ISBN 1887178171
- Mander, Jerry. 1977. Four Arguments for the Elimination of Television. Reprint edition, 2002. New York: HarperPerennial. ISBN 0688082742
- Postman, Neil. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. Penguin USA, 1985. ISBN 0670804541
- Sigman, Aric. 2005. Remotely Controlled: How Television Is Damaging Our Lives. New edition, 2007. Random House UK. ISBN 0091906903
- Smith-Shomade, Beretta E. 2002. Shaded Lives: African-American Women and Television. Piscataway, NJ: Rutgers University Press. ISBN 0813531055
- Taylor, Alan. 2005. We, the Media: Pedagogic Intrusion into U.S. Mainstream Film and Television News Broadcasting Rhetorics. Peter Lang Academic Book Publishers. ISBN 3631518528
All retrieved May 29, 2007.
- GOOYA (UK) – A directory of world television channels
- Television History — The First 75 Years
- The Encyclopedia of Television at the Museum of Broadcast Communications
- MZTV Museum of Television – Some of the rarest sets in America
- CNET News.com's Me TV Wiki about the future of television.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.
Laptops – the Sheer Convenience of Portable Computing
Computers manipulate electronic data according to sets of instructions called programs. It is this feature that distinguishes computers from calculators and makes them highly versatile. A computer program can include anything from a few score instructions to a billion or more. It may take a team of computer programmers years to develop a program such as an internet browser, and the program may still contain some errors, called bugs. Some of the earliest computers were as large as a room. Modern computers use integrated circuits (ICs) to perform their complex functions. These circuits have made possible compact, lightweight variants of computers; some simple modern computers are small enough to fit into a wristwatch.
The one feature that distinguishes a computer from other machines is its ability to be programmed: it can store a set of instructions in memory and execute them later as the user requires. A computer's central processing unit (CPU) directs and coordinates its various components. Technological advances have made it possible to develop portable computers, called laptops or notebook computers. These devices generally weigh no more than 18 pounds (8 kilograms). They usually sport a liquid crystal display (LCD) and a full QWERTY keyboard. Many variants of portable computers, such as personal digital assistants (PDAs) and internet tablets, are also available in the markets.
The first true portable computer, the GRiD Compass 1101, was developed by Bill Moggridge in 1980. Apple released the Macintosh Portable in 1989; it was the company's first computer powered by a battery. In 1994, IBM released its PowerPC notebook, which ran a UNIX-based AIX operating system. Most present-day portable computers feature a PC card and a 30 cm active matrix display with a resolution of not less than 1024 x 768 pixels. The devices usually feature an integrated video and sound chip. The newest models of notebook computers use lithium polymer batteries, which are more powerful than the earlier lithium ion or nickel metal hydride batteries.
Leading manufacturers of computers like Sony, Toshiba and Acer have released several high performance laptops in the markets. VAIO is among the most successful models of Sony laptops. The word VAIO is an acronym for Video Audio Integrated Operation. The device sports Sony’s proprietary XBRITE display that provides about 1.5 times better brightness than traditional LCD displays. The device comes loaded with Windows Vista operating system. The AR series of the device was first to feature a Blu-ray disc burner. The device’s SZ series feature an Intel GMA 950 graphics chip that shows high resolution graphics.
Toshiba’s many models of notebook computers have registered good sales. Qosmio, Tecra and Portege are among the best selling models of Toshiba laptops. The Portege M400-1115E uses an Intel Core Duo T2400 processor. The device is equipped with 80 GB of hard disk drive and weighs 2.1 kg. It runs on a Windows XP Tablet Edition operating system. It sports a 12.1″ XGA display. The device has support for Bluetooth and WiFi connectivities and is available in titanium silver colour.
Acer's Aspire series of notebooks has gained wide acceptance among users and is counted among the best selling Acer laptops. The device features a 'Pebble' design and is intended for general users. The Aspire 4710 comes equipped with Gigabit LAN, a 160 GB hard disk drive and 1 GB of RAM, and operates on a Linux platform. Manufacturers like Hewlett-Packard, Dell, Compaq and Lenovo have also rolled out some highly successful models of notebooks. Laptops have given new dimensions to computing, and the devices have become a must-have for many senior executives and professionals.
Psychopathy is a personality or mental disorder characterized partly by antisocial behavior, a diminished capacity for remorse, and poor behavioral controls. As a diagnostic category in the Diagnostic and Statistical Manual of Mental Disorders, psychopathy has been replaced by antisocial personality disorder (ASPD).
While no psychiatric or psychological organization has sanctioned a diagnosis of "psychopathy" itself, assessments of psychopathy characteristics are widely used in criminal justice settings in some nations and may have important consequences for individuals. The term is also used by the general public, in popular press, and in fictional portrayals.
Although there are behavioral similarities, psychopathy and ASPD, according to criteria in the Diagnostic and Statistical Manual of Mental Disorders, are not synonymous. The diagnosis of ASPD covers two to three times as many prisoners as those that have been labeled psychopaths. Most offenders scoring high on the Hare Psychopathy Checklist (PCL-R) also pass the ASPD criteria, but most of those with ASPD do not score high on the PCL-R. Psychopaths are, despite the similar names, rarely psychotic.
The word "psychopathy" is a joining of the Greek words psyche ψυχή (soul) and pathos πάθος (suffering, feeling). The first documented use is from 1847 in Germany as psychopatisch, and the noun psychopath has been traced to 1885.
In medicine, patho- has long had a specific meaning of disease. Thus pathology has meant the study of disease since 1610, and psychopathology the study of mental disorder since 1847. A sense of "worthy to be a subject of pathology, morbid, excessive" is attested from 1845, including the phrase pathological liar from 1891 in the medical literature.
Psychosis was also used in Germany from 1841, including in a general sense of any mental derangement. The suffix -ωσις (-osis) meant in this case "abnormal condition". This term or its adjective psychotic would come to refer specifically to mental states or disorders characterized by hallucinations, delusions or being in some other sense out of touch with reality.
The term psychopathy initially had a very general meaning too, referring to all sorts of mental disorders. Some medical dictionaries still define it in the narrow and broad sense, for example MedlinePlus from the U.S. National Library of Medicine. Others, such as Stedman's Medical Dictionary, define it only as an outdated term for an antisocial type of personality disorder.
The label psychopath has been described as strangely nonspecific but probably persisting because it indicates that the source of behavior lies in the psyche rather than in the situation. The media usually uses the term to designate any criminal whose offenses are particularly abhorrent and unnatural, but that is not its original or general psychiatric meaning. In the alternative term sociopath, socio has been common in compound words since around 1880, referring to social or society.
Measurement instruments
Psychopathy Checklist
Psychopathy is most commonly assessed with the Psychopathy Checklist, Revised (PCL-R), created by psychologist Robert D. Hare. Each of the 20 items in the PCL-R is scored on a three-point scale, with a rating of 0 if it does not apply at all, 1 if there is a partial match or mixed information, and 2 if there is a reasonably good match to the offender. Scoring is ideally done through a face-to-face interview together with supporting information on lifetime behavior (e.g. from case files), but is also done based only on file information. It can take up to three hours to collect and review the information.
The items are grouped as follows:

Factor 1: Facet 1 (Interpersonal) and Facet 2 (Affective)

Factor 2: Facet 3 (Lifestyle) and Facet 4 (Antisocial)

Other items: items that load on neither factor (promiscuous sexual behavior; many short-term marital relationships)
The PCL-R is referred to by some as the "gold standard" for assessing psychopathy. It was developed with and for criminal samples, based on Hervey Cleckley's pioneering mid-20th-century characterization but with his positive-adjustment indicators omitted. High PCL-R scores are positively associated with measures of impulsivity and aggression, Machiavellianism, and persistent criminal behavior, and negatively associated with measures of empathy and affiliation. A score of 30 out of a maximum of 40 is recommended as the cut-off for the label of psychopathy, although there is little scientific support for this particular break point. For research purposes a cut-off score of 25 is sometimes used, and the UK has used a cut-off of 25 rather than the 30 used in the United States.
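The scoring arithmetic described above (20 items each rated 0, 1 or 2, a total from 0 to 40, and a cut-off of 30 or 25) can be sketched as follows. This is an illustrative calculation only, not a clinical instrument, and the example rating profile is hypothetical:

```python
# Illustrative sketch of the PCL-R scoring arithmetic; not a clinical tool.

def pcl_r_total(item_ratings):
    """Sum the 20 item ratings: 0 = no match, 1 = partial/mixed, 2 = good match."""
    if len(item_ratings) != 20:
        raise ValueError("the PCL-R has exactly 20 items")
    if any(r not in (0, 1, 2) for r in item_ratings):
        raise ValueError("each item is rated 0, 1, or 2")
    return sum(item_ratings)  # total ranges from 0 to 40

def meets_cutoff(total, cutoff=30):
    """Compare a 0-40 total against a cut-off: 30 is the recommended label
    threshold; 25 is sometimes used for research and in the UK."""
    return total >= cutoff

# Hypothetical rating profile: 12 good matches, 5 partial, 3 absent.
ratings = [2] * 12 + [1] * 5 + [0] * 3
total = pcl_r_total(ratings)
print(total)                    # 29
print(meets_cutoff(total, 30))  # False: below the 30 cut-off
print(meets_cutoff(total, 25))  # True: meets the research cut-off
```

The sketch only captures the arithmetic; as the text notes, valid scoring requires a qualified clinician, an interview, and file information.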
The PCL-R items were designed to be split in two. Factor 1 involves interpersonal or affective (emotional) personality traits; higher values are associated with narcissism and low empathy as well as social dominance and less fear or depression. Factor 2 involves impulsive-irresponsible behaviors and antisocial behaviors and is associated with a maladaptive lifestyle including criminality. The two factors correlate with each other to some extent. Each factor is sometimes further subdivided in two: interpersonal vs. affective items for Factor 1, and impulsive-irresponsible lifestyle vs. antisocial behavior items for Factor 2. "Promiscuous sexual behavior" and "many short-term marital relationships" have sometimes been left out in such divisions (Hare, 2003).
Cooke and Michie have argued that a three-factor structure provides a better model than the two-factor structure. The items from Factor 2 strictly relating to antisocial behavior (criminal versatility, juvenile delinquency, revocation of conditional release, early behavioral problems, and poor behavioral controls) are removed. The remaining items are divided into three factors: Arrogant and Deceitful Interpersonal Style, Deficient Affective Experience, and Impulsive and Irresponsible Behavioral Style. Hare and colleagues have published detailed critiques of this model and argue that it has statistical and conceptual problems.
Because an individual's scores may have important consequences for his or her future, the potential for harm if the test is used or administered incorrectly is considerable. The test can only be considered valid if administered by a suitably qualified and experienced clinician under controlled conditions.
There is also a shorter version of the PCL-R, known as the Screening Version (PCL:SV), developed for quicker assessments of larger numbers of people or of groups without criminal records. It has only 12 items and a maximum score of 24 but correlates strongly with the main PCL-R. The corresponding cut-off score is 18.
Hare's concept and checklist have also been criticized. In 2010 there was controversy after it emerged that Hare had threatened legal action that stopped publication of a peer-reviewed article on the PCL-R. Hare alleged the article quoted or paraphrased him incorrectly. The article eventually appeared three years later. It alleged that the checklist is wrongly viewed by many as the basic definition of psychopathy, yet it leaves out key factors while also making criminality too central to the concept. The authors claimed this leads to problems of overdiagnosis and to the use of the checklist to secure convictions. Hare has clarified that he receives less than $35,000 a year from royalties associated with the checklist and its derivatives.
In addition, Hare's concept of psychopathy has been criticized as being only weakly applicable to real-world settings and as tending towards tautology. It is also said to be vulnerable to "labeling effects"; to be over-simplistic and reductionistic; to embody the fundamental attribution error; and to pay too little attention to context and the dynamic nature of human behavior. Some research suggests that ratings made using this system depend on the personality of the person doing the rating, including how empathic they themselves are. One forensic researcher has suggested that future studies need to examine the class background, race and philosophical beliefs of raters, because raters may not be aware of enacting biased judgments of people whose section of society or individual lives they have no understanding of or empathy for.
Psychopathic Personality Inventory
The PPI comprises three higher-order factors: PPI-1 (Fearless dominance), PPI-2 (Impulsive antisociality), and Coldheartedness.
Unlike the PCL, the Psychopathic Personality Inventory (PPI) was developed to comprehensively index personality traits without explicitly referring to antisocial or criminal behaviors themselves. It is a self-report scale that was developed in non-clinical samples (e.g. university students) rather than prisoners, though it may be used with the latter. It was revised in 2005 to become the PPI-R (Lilienfeld & Widows) and now comprises 154 items organized into eight subscales. The item scores have been found to group into two overarching and largely separate factors (unlike the PCL-R factors), plus a third factor that is largely independent of scores on the other two:
I: Fearless dominance. From the subscales Social influence, Fearlessness, and Stress immunity. Associated with less anxiety, depression, and empathy as well as higher well-being, assertiveness, narcissism, and thrill-seeking.
II: Impulsive antisociality. From the subscales "Machiavellian" egocentricity, Rebellious nonconformity, Blame externalization, and Carefree lack of planning. Associated with impulsivity, aggressiveness, substance use, antisocial behavior, negative affect, and suicidal ideation.
III: Coldheartedness. From a subscale with the same name.
A person may score at different levels on the different factors, but the overall score indicates the extent of psychopathic personality. Factor I is associated with social efficacy, while Factor II is associated with maladaptive tendencies.
There are some traditional personality tests that contain subscales relating to psychopathy, though they assess relatively non-specific tendencies towards antisocial or criminal behavior. These include the Minnesota Multiphasic Personality Inventory (Psychopathic Deviate scale); the California Psychological Inventory (Socialization scale); and the Millon Clinical Multiaxial Inventory (Antisocial Personality Disorder scale). There is also the Levenson Self-Report Psychopathy Scale (LSRP) and the Hare Self-Report Psychopathy Scale (HSRP). However, in terms of self-report tests, the PPI/PPI-R has become the most used in modern psychopathy research on adults.
DSM and ICD
There are currently two widely established systems for classifying mental disorders — Chapter V of the International Classification of Diseases (ICD-10) produced by the World Health Organization (WHO) and the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) produced by the American Psychiatric Association (APA). Both list categories of disorders thought to be distinct types, and have deliberately converged their codes in recent revisions so that the manuals are often broadly comparable, although significant differences remain.
The DSM has never listed psychopathy as the official term for a personality disorder, although it shares behavioral characteristics with ASPD, which is characterized by "... a pervasive pattern of disregard for, and violation of, the rights of others that begins in childhood or early adolescence and continues into adulthood", and requires three out of seven specific factors to be present. The ICD's conceptually similar diagnosis is called Dissocial personality disorder, "usually coming to attention because of a gross disparity between behaviour and the prevailing social norms, and characterized by" three of six specific issues.
Although there are behavioral similarities, ASPD according to the Diagnostic and Statistical Manual of Mental Disorders criteria and psychopathy are not synonymous. A diagnosis of ASPD is based on behavioral patterns, whereas psychopathy measurements also include personality characteristics. The diagnosis of ASPD covers two to three times as many prisoners as are rated as psychopaths. Most offenders scoring high on the PCL-R also pass the ASPD criteria, but most of those with ASPD do not score high on the PCL-R. Some who meet criteria for ASPD may only score high on Factor 2 of the PCL-R.
Proponents claim that the Psychopathy Checklist is better able to predict future criminality, violence, and recidivism than the diagnosis of ASPD. Hare writes that there are also differences between PCL-defined psychopaths and others in the "processing and use of linguistic and emotional information", while such differences are small between those diagnosed with ASPD and those who are not. However, the Hare Psychopathy Checklist requires a rather long interview and the availability of considerable additional information, as well as depending in part on judgments of character rather than observed behavior.
Hare wrote that the field trials for the DSM-IV found judgments of personality traits to be as reliable as diagnostic criteria relying only on behavior, but that the personality-traits criteria were dropped in part because it was feared that the average clinician would not use them correctly. Hare criticizes the DSM-IV criteria that were used instead for being poorly empirically tested. In addition, the introductory text describes the personality characteristics typical of psychopathy, which Hare argues makes the manual confusing, as it effectively contains two different sets of criteria. He has also argued that confusion regarding how to diagnose ASPD, confusion regarding the difference between ASPD and psychopathy, and the differing prognoses regarding recidivism and treatability may have serious consequences in settings such as court cases, where psychopathy is often seen as aggravating the crime.
The DSM-V working party is recommending a revision of ASPD to be called antisocial/dyssocial personality disorder. There is also a suggestion to include a subtype "Antisocial/Psychopathic Type".
Other classification issues
Distinct condition or not
A crucial issue regarding the concept of psychopathy is whether it identifies a distinct condition that can be separated from other conditions and "normal" personality types, or whether it is simply a combination of scores on various dimensions of personality found throughout the population in varying combinations.
An early and influential analysis from Harris and colleagues indicated that a discrete category may underlie PCL-R psychopathy, but this was only found for the behavioral Factor 2 items, indicating that this analysis may be related to ASPD rather than psychopathy. Marcus, John, and Edens more recently performed a series of statistical analyses on PPI scores and concluded that psychopathy may best be conceptualized as having a "dimensional latent structure", like depression.
Marcus et al. repeated the study on a larger sample of prisoners, using the PCL-R and seeking to rule out other experimental or statistical issues that may have produced the previously different findings. They again found that the psychopathy measurements do not appear to identify a discrete type (a taxon). They suggest that while for legal or other practical purposes an arbitrary cut-off point on trait scores might be used, there is actually no clear scientific evidence for an objective point of difference by which to call some people "psychopaths". The Hare checklist was developed for research, not clinical forensic diagnosis, and even for research purposes, to improve understanding of the underlying issues, it is necessary to examine dimensions of personality in general rather than only this constellation of traits.
Triarchic model
The triarchic model argues that various concepts of psychopathy can be explained by three factors:
- Boldness. Low fear including stress-tolerance, toleration of unfamiliarity and danger, and high self-confidence and social assertiveness. PCL-R measures this relatively poorly and mainly through Facet 1 of Factor 1. Similar to PPI Fearless dominance. May correspond to differences in the amygdala and other neurological systems associated with fear.
- Disinhibition. Poor impulse control including problems with planning and foresight, lacking affect and urge control, demand for immediate gratification, and poor behavioral restraints. Similar to PCL-R Factor 2 and PPI Impulsive antisociality. May correspond to impairments in frontal lobe systems that are involved in such control.
- Meanness. Lacking empathy and close attachments with others, disdain of close attachments, use of cruelty to gain empowerment, exploitative tendencies, defiance of authority, and destructive excitement seeking. PCL-R in general is related to this but in particular some elements in Factor 1. Similar to PPI Coldheartedness but also includes elements of subscales in Impulsive antisociality. Meanness may possibly be caused by either high boldness or high disinhibition combined with an adverse environment. Thus, a child with high boldness may respond poorly to punishment but may respond better to rewards and secure attachments which may not be available under adverse conditions. A child with high disinhibition may have increased problems under adverse conditions with meanness developing in response.
Psychopathy vs. sociopathy
Hare notes that sociopathy and psychopathy are often used interchangeably, but in some cases the term sociopathy is preferred because it is less likely than is psychopathy to be confused with psychoticism, whereas in other cases which term is used may "reflect the user's views on the origins and determinates of the disorder," with the term sociopathy preferred by those that see the causes as due to social factors and early environment, and the term psychopathy preferred by those who believe that there are psychological, biological, and genetic factors involved in addition to environmental factors.
Primary-secondary distinction
Several researchers have argued that there exist two variants of psychopathy. There is also empirical support for separating persons scoring high on the PCL-R into two groups that do not simply reflect Factor 1 and Factor 2. There is at least preliminary evidence of differences regarding cognition and affect as measured in laboratory tests. Different theories characterize these two variants somewhat differently. Compared to "primary" psychopaths, researchers agree that "secondary" psychopaths have more fear, anxiety, and negative emotions. They are often seen as more impulsive and as showing more reactive anger and aggression. Some preliminary research has suggested that secondary psychopaths may have had a more abusive childhood according to self-reports (which may possibly be inflated in secondary psychopathy), may have a higher risk of future violence, and may respond better to treatment.
Primary psychopathy has been seen as mainly due to genetic factors, while secondary psychopathy has been seen as mainly due to environmental factors, which also has implications for treatment possibilities. Such proposed environmental factors include an abusive childhood or a society presenting opportunities for cheating. Other researchers have argued that genetics and environment are important for both variants. David T. Lykken, using Gray's biopsychological theory of personality, has argued that primary psychopaths innately have little fear while secondary psychopaths innately have increased sensitivity to rewards. Proponents of the triarchic model described above associate primary psychopathy with increased boldness and secondary psychopathy with increased disinhibition.
Other personality dimensions
Some studies have linked psychopathy to other dimensions of personality. These include antagonism (high), conscientiousness (low) and anxiousness (low, or sometimes high). However, there are different views as to which personality dimensions are more central in regard to psychopathy, and in addition the traits are found throughout the general population in differing combinations. Some have also linked psychopathy to high psychoticism - a theorized dimension referring to tough, aggressive or hostile tendencies.
Aspects of this that appear associated with psychopathy are lack of socialization and responsibility, impulsivity, sensation-seeking in some cases, and aggression. Otto Kernberg, from a particular psychoanalytic perspective, believes psychopathy should be considered part of a spectrum of pathological narcissism that ranges from narcissistic personality at the low end, through malignant narcissism in the middle, to psychopathy at the high end. However, narcissism is generally seen as only one aspect of psychopathy as generally defined.
Cleckley checklist
In his book The Mask of Sanity, Hervey M. Cleckley described 16 "common qualities" that he thought were characteristic of the individuals he termed psychopaths. Cleckley's checklist formed the basis for Hare's more current PCL-R checklist (see above):
- Superficial charm and good "intelligence"
- Absence of delusions and other signs of irrational thinking
- Absence of "nervousness" or psychoneurotic manifestations
- Untruthfulness and insincerity
- Lack of remorse and shame
- Inadequately motivated antisocial behavior
- Poor judgment and failure to learn by experience
- Pathologic egocentricity and incapacity for love
- General poverty in major affective reactions
- Specific loss of insight
- Unresponsiveness in general interpersonal relations
- Fantastic and uninviting behavior with drink and sometimes without
- Suicide threats rarely carried out
- Sex life impersonal, trivial, and poorly integrated
- Failure to follow any life plan.
Cleckley also suggested there were milder forms. He ended his survey by saying "If we consider, in addition to these patients (nearly all of whom have records of the utmost folly and misery and idleness over many years and who have had to enter a psychiatric hospital), the vast number of similar people in every community who show the same behavior pattern in milder form but who are sufficiently protected and supported by relatives to remain at large, the prevalence of this disorder is seen to be appalling."
Moral judgment
Psychopaths have been considered notoriously amoral: showing an absence of, indifference towards, or disregard for moral beliefs. There are few firm data on patterns of moral judgment, however. Studies of the developmental level (sophistication) of moral reasoning have found all possible results: lower, higher, or the same as in non-psychopaths. Studies that compared judgments of personal moral transgressions with judgments of breaking conventional rules or laws found that psychopaths rated them as equally severe, whereas non-psychopaths rated the rule-breaking as less severe.
A study comparing judgments of whether personal or impersonal harm would be endorsed in order to achieve the rationally maximum (utilitarian) amount of welfare, found no significant differences between psychopaths and non-psychopaths. However, a further study using the same tests found that prisoners scoring high on the psychopathy checklist were more likely to endorse impersonal harm or rule violations than non-psychopaths were. Psychopaths who scored low in anxiety were also more willing to endorse personal harm on average.
Assessing accidents, where one person harmed another unintentionally, psychopaths judged such actions to be more morally permissible. This result is perhaps a reflection of psychopaths' failure to appreciate the emotional aspect of the victim's harmful experience, and furnishes direct evidence of abnormal moral judgment in psychopathy.
Intelligence

Hare and Neumann (2008) state that a large literature shows that there is at most only a weak association between psychopathy and IQ. They consider that the early pioneer Cleckley included high IQ in his checklist due to selection bias, since many of his patients were "well educated and from middle-class or upper-class backgrounds", and state that "there is no obvious theoretical reason why the disorder described by Cleckley or other clinicians should be related to intelligence; some psychopaths are bright, others less so."
In addition, studies indicate that different aspects of the definition of psychopathy (e.g. interpersonal, affective (emotion), behavioral and lifestyle components) can show different links to intelligence, and it can also depend on the type of "intelligence" assessment (e.g. verbal IQ, creative, practical, analytical). Those scoring high on psychopathy measures may tend to score lower on verbal IQ.
According to R. J. R. Blair, psychopaths demonstrate impairment in stimulus-reinforced learning (whether punishment-based or reward-based). This may be due to dysfunctions in the amygdala and ventromedial prefrontal cortex. People scoring ≥25 in the Psychopathy Checklist Revised, with an associated history of violent behavior, appear to have significantly reduced microstructural integrity in their uncinate fasciculus — white matter connecting the amygdala and orbitofrontal cortex. There is DT-MRI evidence of breakdowns in the white matter connections between these two important areas.
Co-occurrence with other mental conditions
Psychopaths may have various other mental conditions. It has been found that psychopathy scores correlated with "antisocial, narcissistic, histrionic, and schizoid personality disorders ... but not neurotic disorders or schizophrenia". Additionally, the constellation of traits in psychopathy assessments overlaps considerably with ASPD criteria and also with histrionic personality disorder and narcissistic personality disorder criteria.
Psychopathy is associated with substance use disorders. This appears to be linked more closely to anti-social/criminal lifestyle, as measured by Factor 2 of the PCL-R, than the interpersonal-emotional traits assessed by Factor I of the PCL-R.
Attention deficit hyperactivity disorder (ADHD) is known to be highly comorbid with conduct disorder, and may also co-occur with psychopathic tendencies. This may be explained in part by deficits in executive function.
Anxiety disorders often co-occur with ASPD, and contrary to assumptions psychopathy can sometimes be marked by anxiety; however, this appears to be due to the antisocial aspect (factor 2 of the PCL), and anxiety may be inversely associated with the interpersonal-emotional traits (Factor I of the PCL-R).
It has been suggested that psychopathy may be comorbid with several other diagnoses than these, but limited work on comorbidity has been carried out. This may be partly due to difficulties in using inpatient groups from certain institutions to assess comorbidity, owing to the likelihood of some bias in sample selection.
The majority of crimes, including violent crimes, are committed by a small part of the population (5-7%). However, those who repeatedly commit crimes are a heterogeneous group with varying personality characteristics, and psychopathy cannot be said to be the single underlying type.
Correlation with criminality
The PCL-R manual states an average score of 22.1 in North American prisoner samples and that 20.5% scored 30 or higher. An analysis of prisoner samples from outside North America found a somewhat lower average value of 17.5. A diagnosis of ASPD is about two to three times as common in prisoners as a label of psychopathy is. A 2009 study by Coid et al. of a representative sample of British prisoners, unlike the selected samples used in many other studies, found a prevalence of PCL-R > 30 in 7.7% of men and 1.9% of women. Psychopathy scores "correlated with younger age, repeated imprisonment, detention in higher security, disciplinary infractions, antisocial, narcissistic, histrionic, and schizoid personality disorders, and substance misuse, but not neurotic disorders or schizophrenia." Most correlations were similar to those in other studies.
Psychopathy, as measured with the PCL-R in institutional settings, shows in meta-analyses small to moderate effect sizes (r = 0.23 to 0.30) with institutional misbehavior, post-release crime, and post-release violent crime, with similar effects for the three outcomes. Individual studies give similar results for adult offenders, forensic psychiatric samples, community samples, and youth. The PCL-R is poorer at predicting sexual re-offending.
However, this link appears to be due largely to the scale items that assess impulsive behaviors and past criminal history, which are well-established but very general risk factors. The aspects of core personality often held to be distinctively psychopathic generally show little or no predictive link to crime by themselves. Thus, Factor 1 of the PCL-R and Fearless dominance of the PPI-R have smaller or no relationship to crime, including violent crime. In contrast, Factor 2 and Impulsive antisociality of the PPI-R are associated more strongly with criminality. Factor 2 has a relationship of similar strength to that of the PCL-R as a whole. The antisocial facet of the PCL-R is still predictive of future violence after controlling for past criminal behavior, which, together with results regarding the PPI-R (which by design does not include past criminal behavior), suggests that impulsive behavior is an independent risk factor.
Some clinicians suggest that assessment of the construct of psychopathy does not necessarily add value to violence risk assessment. Several other risk assessment instruments can predict further crime with an accuracy similar to the PCL-R, and some of these are considerably easier, quicker, and less expensive to administer. This may even be done automatically by a computer, simply based on data such as age, gender, number of previous convictions, and age at first conviction. Some of these assessments may also identify treatment change and goals, identify quick changes that may help short-term management, identify more specific kinds of violence that may be at risk, and may have established specific probabilities of offending for specific scores. The PCL-R may continue to be popular for risk assessment because of its pioneering role and the large amount of research done using it. Although psychopathy is associated on average with an increased risk of violence, it is difficult to know how to manage the risk.
It has been suggested that psychopaths tend to commit more "instrumental" violence than "reactive" violence. One finding in this regard comes from a 2002 study of homicide offenders, which reported that the homicides committed by psychopaths were almost always (93.3%) primarily instrumental, while about half (48.4%) of those committed by non-psychopaths were. However, contrary to equating this with killing "in cold blood", more than a third of the homicides by psychopaths involved emotional reactivity as well.
In addition, the non-psychopaths still accounted for most of the instrumental homicides, because most of these murderers were not psychopaths. In any case, FBI profilers indicate that serious victim injury is generally an emotional offense, and some research supports this, at least with regard to sexual offending. One study has found more serious offending by non-psychopaths on average than by psychopaths (e.g. more homicides versus more armed robbery and property offenses) and another that the Affective facet of PCL-R predicted reduced offense seriousness.
Sexual offending
A 2011 study of conditional releases for Canadian male federal offenders found that psychopathy was related to more violent and non-violent offences but not to more sexual offences. For child molesters, psychopathy was associated with more offences. Despite "their extensive criminal histories and high recidivism rate", psychopaths showed "a great proficiency in persuading parole boards to release them into the community." It is purported that high-psychopathy offenders (both sexual and non-sexual offenders) are about 2.5 times more likely to be granted conditional release compared to non-psychopathic offenders.
Some studies have found only weak associations between psychopathy and sexual offending overall. The association is more certain for sexual violence. Psychopaths have higher sexual arousal to depictions of rape than non-psychopaths. Rapists, especially sadistic rapists, and sexual homicide offenders have a high rate of psychopathy. Some researchers have argued that psychopaths have a preference for violent sexual behavior.
One study examined the relationship between psychopathy scores and types of aggression expressed in a sample of 38 sexual murderers. 84.2% of the sample had PCL-R scores above 20 and 47.4% above 30. 82.4% of those above 30 had engaged in sadistic violence (defined as enjoyment indicated by self-report or evidence), as compared to 52.6% of those below, and total PCL-R and Factor 1 scores correlated significantly with sadistic violence. In considering the challenging issue of possible reunification of some sex offenders into homes with a non-offending parent and children, it has been advised that any sex offender with a significant criminal history should be assessed on the PCL-R, and if they score 18 or higher then they should be excluded from any consideration of being placed in a home with children under any circumstances.
Other offending
Terrorists are sometimes called psychopaths, and comparisons can be drawn with traits such as antisocial violence, a selfish worldview that precludes welfare for others, lack of remorse or guilt, and blaming external events. However, such comparisons could also then be drawn more widely, for example to soldiers in wars. In addition, it has been noted that coordinated terrorist activity requires organization, loyalty and ideology; traits such as self-centeredness, unreliability, poor behavioral controls, and unusual behaviors may be disadvantages.
Häkkänen-Nyholm and Nyholm (2012) have discussed the possibility of psychopathy being associated with organised crime, economic crime and war crimes.
It has been speculated that some psychopaths may be socially successful, due to factors such as low disinhibition in the triarchic model in combination with other advantages such as a favorable upbringing and good intelligence. However, there is little research on this, in part because the PCL-R does not include positive adjustment characteristics and most research using it has been done on criminals. Some research using the PPI indicates that psychopathic interpersonal and affective traits/boldness and/or meanness in the triarchic model exist in noncriminals and correlate with stress immunity and stability.
Psychologists Fritzon and Board, in their study comparing the incidence of personality disorders in business executives against criminals detained in a mental hospital, found that some personality disorders were more common in the executives. They described the personality-disordered executives as "successful psychopaths" and the personality-disordered criminals as "unsuccessful psychopaths".
Sex differences
Research on psychopathy has largely been done on men, and the PCL-R was developed using mainly male criminal samples, raising the question of how well the results apply to women. There has also been research investigating sex differences. Men score higher than women on both the PCL-R and the PPI, and on both of their main scales. The differences tend to be somewhat larger on the interpersonal-affective scale than on the antisocial scale. Most, but not all, studies have found broadly similar factor structures for men and women.
Many associations with other personality traits are similar, although in one study the antisocial factor was more strongly related to impulsivity in men and more strongly related to openness to experience in women. It has been suggested that psychopathy in men manifests more as an antisocial pattern while in women it manifests more as a histrionic pattern. Studies on this have shown mixed results. PCL-R scores may be somewhat less predictive of violence and recidivism in women. On the other hand, psychopathy may have a stronger relationship with suicide, and possibly internalizing symptoms, in women. A suggestion is that psychopathy manifests more as externalizing behaviors in men and more as internalizing behaviors in women.
Causes and pathophysiology
Childhood and adolescent precursors
The "Psychopathy Checklist: Youth Version" (PCL:YV) is an adaptation of the PCL-R for 13–18 years old. It is, like the PCL-R, done by a trained rater based on an interview and an examination of criminal and other records. The "Antisocial Process Screening Device" (APSD) is also an adaptation of the PCL-R. It can be administered by parents or teachers for 6–13 year olds or it can be self-administered by 13–18 years olds. High psychopathy scores for both juveniles, as measured with these instruments, and adults, as measured with the PCL-R, have many similar associations with other variables. This include similar predictive ability regarding violence and criminality as well as this mainly being due to the scales measuring impulsive and antisocial behaviors rather than the scales measuring interpersonal and affective features. As for adults, several other measurement tools have similar predictive ability at risk assessment. One difference is that juvenile psychopathy appears to be associated with more negative emotionality such as anger, hostility, anxiety, and, depression. Some recent studies have also found poorer ability at predicting long-term, adult offending such as the predictive ability not being better than unaided clinical judgment in one study.
Conduct disorder is a diagnosis with similarities to ASPD. The DSM-IV allows differentiating between childhood onset before age 10 and adolescent onset at age 10 and later. Childhood onset is argued to be more often due to a personality disorder caused by neurological deficits interacting with an adverse environment. For many, but not all, childhood onset is associated with what Terrie Moffitt's developmental theory of crime refers to as "life-course-persistent" antisocial behavior, as well as poorer health and economic status. Adolescent onset is argued to be more typically associated with short-term antisocial behavior. It has been suggested that the combination of early-onset conduct disorder and ADHD may be associated with life-course-persistent antisocial behaviors as well as psychopathy.
There is evidence that this combination is more aggressive and antisocial than conduct disorder alone. However, it is not a particularly distinct group, since the vast majority of young children with conduct disorder also have ADHD. Some evidence indicates that this group has deficits in behavioral inhibition similar to those of adult psychopaths. They may not be more likely than those with conduct disorder alone to have the interpersonal/affective features and the deficits in emotional processing characteristic of adult psychopaths. Proponents of different types/dimensions of psychopathy have seen this type as possibly corresponding to adult secondary psychopathy/disinhibition in the triarchic model.
The DSM-V proposes the specifier "With Significant Callous-Unemotional Traits", which would require at least two out of four features for at least a year: lack of remorse/guilt, lack of empathy (callousness), lack of affect, and lack of concern for performance. It has been suggested that this is a subgroup of early-onset conduct disorder, distinct from the larger group in having fewer deficits in inhibition, less fear and anxiety, less emotional reactivity and emotional negativity, more boldness and/or meanness, less intellectual impairment, and less exposure to poor parental practices, although parental practices do affect outcomes for this group. It has been argued that this group is at increased risk of future aggressive, criminal, and other antisocial behaviors, but it is unclear how much the callous-unemotional traits contribute to this, since this group also often has higher impulsivity and more prior antisocial behavior compared to children with conduct disorder without callous-unemotional traits. Proponents of different types/dimensions of psychopathy have seen this type as possibly corresponding to adult primary psychopathy/boldness in the triarchic model.
There are moderate to high correlations between psychopathy rankings from late childhood to early adolescence. The correlations are considerably lower from early- or mid-adolescence to adulthood. In one study most of the similarities were on the Impulsive- and Antisocial-Behavior scales. Of those adolescents who scored in the top 5% highest psychopathy scores at age 13, less than one third (29%) were classified as psychopathic at age 24.
Three behaviors — bedwetting, cruelty to animals and firestarting, known as the Macdonald triad — were first described by J.M. MacDonald as possible indicators, if occurring together over time during childhood, of future episodic aggressive behavior. However, subsequent research has found that bedwetting is not a significant factor and the triad as a particular profile has been called an urban legend. Questions remain about a connection between animal cruelty and later violence, though it has been included in the DSM as a possible factor in conduct disorder and later antisocial behavior.
A study by Farrington of a sample of London males followed between age 8 and 48 included studying which factors predicted scoring 10 or more on the PCL: SV at age 48. The strongest factors were "having a convicted father or mother, physical neglect of the boy, low involvement of the father with the boy, low family income, and coming from a disrupted family." Other significant factors included poor supervision, harsh discipline, large family size, delinquent sibling, young mother, depressed mother, low social class, and poor housing.
There has also been association between psychopaths and detrimental treatment by peers. Henry Lee Lucas, a serial killer and diagnosed psychopath, was bullied as a child and later said that his hatred for everyone spawned from mass social rejection.
Proponents of the triarchic model described earlier see psychopathy as due to the interaction of an adverse environment and genetic predispositions. What is adverse may differ depending on the underlying predisposition. Thus, persons having high boldness may respond poorly to punishment but may respond better to rewards and secure attachments.
One approach to studying the role of genetics in crime is to calculate the heritability coefficient, which describes the proportion of the variance in some characteristic that is due to genetic factors. The non-heritable proportion can be further divided into the "shared environment", the non-genetic factors that make siblings similar, and the "non-shared environment", the non-genetic factors that make siblings different from one another. Studies on the personality characteristics typical of psychopathy have found moderate genetic and moderate "non-shared environmental" influences, but none from the "shared environment". A study using the PPI found the two factors, fearless dominance and impulsive antisociality, to be similarly moderately influenced by genetics and uncorrelated with one another, which indicates separate genetic influences.
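The variance partition described above can be illustrated with the classic twin-study arithmetic. The sketch below is purely illustrative and is not taken from any study cited here: it applies Falconer's textbook formulas to hypothetical monozygotic (MZ) and dizygotic (DZ) twin correlations chosen to mimic the pattern of moderate genetic and non-shared environmental influence with no shared-environment effect.

```python
# Illustrative only: Falconer's formulas for decomposing trait variance
# into additive genetic (A), shared-environment (C), and non-shared-
# environment (E) proportions. The twin correlations are hypothetical.

def ace_from_twin_correlations(r_mz, r_dz):
    """Estimate A, C, E variance proportions from MZ and DZ twin correlations."""
    a = 2 * (r_mz - r_dz)   # heritability coefficient (A)
    c = 2 * r_dz - r_mz     # shared-environment proportion (C)
    e = 1 - r_mz            # non-shared environment plus error (E)
    return a, c, e

# Hypothetical correlations matching the pattern in the text:
# moderate A, moderate E, and essentially zero C.
a, c, e = ace_from_twin_correlations(r_mz=0.45, r_dz=0.225)
print(f"A (genetic): {a:.2f}, C (shared env): {c:.2f}, E (non-shared): {e:.2f}")
```

With these inputs the genetic and non-shared components each account for roughly half the variance, while the shared-environment component is zero, matching the qualitative findings described above.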
Genetic factors may generally influence the development of psychopathy while environmental factors affect the specific traits that predominate.
A study on a large group of children found more than 60% heritability for "callous-unemotional traits" and that conduct problems among children with these traits had a higher heritability than among children without these traits.
Studies have suggested a connection between a variant of the monoamine oxidase A (MAO-A) gene (dubbed the "warrior gene") and psychopathy. In the variant, the allele associated with behavioral traits consists of 30 bases, and produces comparatively less MAO-A enzyme. Low MAO-A activity was found to result in a significantly increased risk of aggression and antisocial behavior.
The variant was found to vary widely in demographic prevalence among different ethnic groups. 59% of African-American men, 56% of Maori men and 54% of Chinese men carry the MAO-A 3R genetic variant, compared to 34% of Caucasians, suggesting that the former ethnic groups are more genetically predisposed by the MAO-A gene towards aggression or antisocial tendencies compared to other ethnic groups studied.
Evolutionary explanations
Psychopathy is associated with several adverse life outcomes as well as increased risk of early death due to factors such as homicides, accidents, and suicides. This, in combination with the evidence for genetic influences, is evolutionarily puzzling and may suggest that there are compensating evolutionary advantages. Researchers within evolutionary psychology have proposed several explanations. One is that some psychopaths may be socially successful. Another is that some associated traits, such as early, promiscuous, adulterous, and coercive sexuality, may increase reproductive success. A third is that psychopathy may represent a frequency-dependent strategy of social cheating. This may work as long as there are few other psychopaths in the community, since more psychopaths means an increasing risk of encountering another psychopath, as well as non-psychopaths likely adapting more countermeasures against cheaters.
Criticisms include that it may be better to look at the contributing personality factors rather than treat psychopathy as a unitary concept due to poor testability and a lack of empirical evidence regarding reproductive success of psychopaths. Furthermore, if psychopathy is caused by the combined effects of a very large number of adverse mutations then each mutation may have so small an effect that it escapes natural selection.
In laboratory research, psychopaths have responded differently to aversive stimuli. They have shown weak conditioning to painful stimuli and poor learning to avoid responses that cause punishment. They have had low reactivity in the autonomic nervous system, as measured with skin conductance, while waiting for a painful stimulus but not when the stimulus occurs. While it has been argued that the reward system functions normally, some studies have also found reduced reactivity to pleasurable stimuli.
Psychopaths have also had difficulty switching from an ongoing action despite environmental cues signaling a need to do so. One possibility is that this explains the difficulty responding to punishment although it is unclear if it can explain findings such as deficient conditioning. There may also be methodological issues regarding the research.
Several studies have found that psychopaths have difficulty identifying certain facial expressions. This has been linked to the amygdala in patients with brain damage, but a recent meta-analysis suggested the deficits are not always found in psychopathy, and tend to show more on tasks requiring verbal processing (e.g. a verbal response to a questioner) at the same time as visual processing.
Neuroimaging studies have found structural and functional differences between those scoring high and low on the PCL-R, with a 2011 review by Skeem et al. stating that they are "most notably in the amygdala, hippocampus and parahippocampal gyri, anterior and posterior cingulate cortex, striatum, insula, and frontal and temporal cortex". A 2008 review by Weber et al. stated that "psychopathy is associated with brain abnormalities in a prefrontal-temporo-limbic circuit—i.e. regions that are involved, among others, in emotional and learning processes." The amygdala and frontal areas have been suggested as particularly important. People scoring ≥25 on the Psychopathy Checklist Revised, with an associated history of violent behavior, appear to have significantly reduced microstructural integrity in their uncinate fasciculus—the white matter connecting the amygdala and orbitofrontal cortex. The more extreme the psychopathy, the greater the abnormality. Psychopathic personality traits, termed "acquired sociopathy", can develop due to lesions of the orbitofrontal cortex.
In a recent study of how psychopaths respond to emotional words, widespread differences in activation patterns were shown across the temporal lobe, including the right anterior temporal gyrus, when criminal psychopaths were compared to "normal" volunteers. This is consistent with views in clinical psychology.
There is DT-MRI evidence of breakdowns in the white matter connections between these two important areas in a small British study of nine criminal psychopaths. This evidence suggests that the degree of abnormality was significantly related to the degree of psychopathy and may explain the offending behaviors.
Some of these findings are consistent with other research and theories, such as psychopaths' low fear being consistent with changes in the amygdala. However, the amygdala has also been associated with positive emotions, and there have been inconsistent results in the studies regarding particular areas. "Callous-unemotional" traits in children have also been associated with changes in the amygdala, but again there may be methodological issues.
Proponents of the primary-secondary psychopathy distinction and triarchic model discussed earlier argue that there are neuroscientific differences between subgroups of psychopaths supporting their views. Thus the boldness factor in the triarchic model is argued to be associated with reduced activity in the amygdala during fearful or aversive stimuli and reduced startle response while the disinhibition factor is argued to be associated with impairment of frontal lobe tasks. There is evidence that boldness and disinhibition are genetically distinguishable.
Neurotransmitters and hormones
High levels of testosterone combined with low levels of cortisol have been theorized as contributing factors. Testosterone is "associated with approach-related behavior, reward sensitivity, and fear reduction". Cortisol increases "the state of fear, sensitivity to punishment, and withdrawal behavior". Injecting testosterone "shift[s] the balance from punishment to reward sensitivity", decreases fearfulness, and increases "responding to angry faces". Some studies have found that antisocial and aggressive behaviors are associated with high testosterone levels but it is unclear if psychopaths have high testosterone levels. A few studies have found psychopathy to be linked to low cortisol levels.
High testosterone levels combined with low serotonin levels may increase violent aggression. Some research suggests that testosterone alone does not cause aggression but increases dominance-seeking behaviors. Low serotonin is associated with "impulsive and highly negative reactions" which, if combined with high testosterone, may cause aggression if an individual becomes frustrated.
Studies have indicated that individuals with the traits meeting criteria for psychopathy show a greater dopamine response to potential "rewards" such as monetary promises or taking drugs such as amphetamines. This has been theoretically linked to an increased impulsivity.
A 2010 British study found that a large 2D:4D digit ratio, an indication of high prenatal estrogen exposure, was a "positive correlate of psychopathy in females, and a positive correlate of callous affect (psychopathy sub-scale) in males".
Clinical management
Psychopathy has often been considered untreatable. Harris and Rice's Handbook of Psychopathy says that there is little evidence of a cure or effective treatment for psychopathy; no medications can instill empathy, and psychopaths who undergo traditional talk therapy might become more adept at manipulating others and more likely to commit crime. The only study finding increased criminal recidivism after treatment was, according to a 2011 review, a retrospective study with several methodological problems, examining a 1960s treatment program that would likely not be approved today. Some relatively rigorous quasi-experimental studies using more modern treatment methods have found improvements in reducing future violent and other criminal behavior, regardless of PCL-R scores, although none was a randomized controlled trial. Some other studies have found improvements in risk factors for crime such as substance abuse. As of a 2011 review, no study had examined whether the personality traits themselves could be changed by such treatments. Some studies have shown that punishment and behavior modification techniques may not improve the behavior of psychopaths.
Legal response
The PCL-R, the PCL:SV, and the PCL:YV are highly regarded and widely used in criminal justice settings, in particular in North America. They may be used for risk assessment and for assessing treatment potential, and as part of decisions regarding bail, sentencing, which prison to use, parole, and whether a youth should be tried as a juvenile or as an adult. There have been several criticisms of this practice. They include the general criticisms of the PCL-R, the availability of other risk assessment tools which may have advantages, and excessive pessimism regarding prognosis and treatment possibilities (see earlier sections).
The interrater reliability of the PCL-R can be high when it is used carefully in research but tends to be poor in applied settings. In particular, Factor 1 items are somewhat subjective. In sexually violent predator cases, the PCL-R scores given by prosecution experts were consistently higher than those given by defense experts in one study. Scoring may also be influenced by other differences between raters. In one study it was estimated that, of the PCL-R variance, about 45% was due to true offender differences, 20% was due to which side the rater testified for, and 30% was due to other rater differences.
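The 45%/20%/30% decomposition can be pictured as a simple variance-components model. The simulation below is a hedged illustration, not the cited study's method: it generates hypothetical scores as the sum of independent offender, adversarial-side, and residual rater effects, with the component standard deviations chosen so the variance shares come out near the reported proportions.

```python
import random

# Illustrative sketch (hypothetical numbers): simulate PCL-R-style scores
# as a sum of independent components -- true offender differences, an
# adversarial-allegiance effect (which side retained the rater), and
# residual rater idiosyncrasy -- then check each component's share of
# the total variance.

random.seed(0)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n = 20000
offender = [random.gauss(0, 3.0) for _ in range(n)]   # true differences
side = [random.gauss(0, 2.0) for _ in range(n)]       # prosecution vs. defense
rater = [random.gauss(0, 2.45) for _ in range(n)]     # other rater effects
scores = [o + s + r for o, s, r in zip(offender, side, rater)]

total = variance(scores)
for name, comp in [("offender", offender), ("side", side), ("rater", rater)]:
    print(f"{name}: {variance(comp) / total:.0%} of total variance")
```

Because the components are independent, their variances add, and the chosen standard deviations reproduce shares of roughly 45–50%, 20%, and 30%, mirroring the split described in the study.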
United Kingdom
In the United Kingdom, "Psychopathic Disorder" was legally defined in the Mental Health Act (UK) as, "a persistent disorder or disability of mind (whether or not including significant impairment of intelligence) which results in abnormally aggressive or seriously irresponsible conduct on the part of the person concerned." This term, which did not equate to psychopathy, was intended to reflect the presence of a personality disorder, in terms of conditions for detention under the Mental Health Act 1983. With the subsequent amendments to the Mental Health Act 1983 within the Mental Health Act 2007, the term "psychopathic disorder" has been abolished, with all conditions for detention (e.g. mental illness, personality disorder, etc.) now being contained within the generic term of "mental disorder".
In England and Wales, the diagnosis of dissocial personality disorder is grounds for detention in secure psychiatric hospitals under the Mental Health Act if they have committed serious crimes, but since such individuals are disruptive for other patients and not responsive to treatment this alternative to prison is not often used.
United States
"Sexual psychopath" laws
Starting in the 1930s, before the modern concept of psychopathy, "sexual psychopath" laws were introduced by some states until by the mid-1960s more than half of the states had such laws. "Sexual psychopaths" were seen as a distinct group of sex offenders who were not seriously mentally ill but had a "psychopathic personality" that could be treated. This was in agreement with the general rehabilitative trends at this time. Courts sent such sex offenders to a mental health facility for community protection and treatment.
Starting in 1970 many of these laws were modified or abolished in favor of more traditional responses such as imprisonment due to criticism of the "sexual psychopath" concept as lacking scientific evidence, the treatment being ineffective, and predictions of future offending being dubious. There were also a series of cases where persons treated and released committed new sex crimes. Starting in the 1990s several states have passed sexually dangerous person laws, not synonymous with the modern concept of psychopathy, which permit confinement after a sentence has been completed. Psychopathy measurements may be used in the confinement decision process.
A 2008 study using the Psychopathy Checklist: Screening Version (PCL: SV) found that 1.2% of a US sample scored 13 or more which indicates "potential psychopathy". Over half of those studied had scores of 0 or 1 and about two-thirds scored 2 or less. Higher scores were significantly associated with more violence, higher alcohol use, and estimated lower intelligence.
A 2009 British study by Coid et al., also using the PCL: SV, reported a community prevalence of 0.6% scoring 13 or more. The lower prevalence than in the 2008 study may be due to the 2009 sample being more representative of the general population. The scores "correlated with: younger age, male gender; suicide attempts, violent behavior, imprisonment and homelessness; drug dependence; histrionic, borderline and adult antisocial personality disorders; panic and obsessive–compulsive disorders."
PCL-R creator Robert Hare has stated that many (male) psychopaths have a pattern of mating with, and quickly abandoning, women, and as a result have a high fertility rate. These children may inherit a predisposition to psychopathy. Hare describes the implications as chilling. However, empirical evidence regarding the reproductive success of psychopaths is lacking.
History

Psychopathy has a long history, beginning as a very general concept including many chronic conditions and gradually narrowing to the current one. Sociopathy was introduced as an alternative name, indicating a belief that the causes were social. A more modern concept was introduced by Hervey Cleckley in his influential The Mask of Sanity, first published in 1941.
The Diagnostic and Statistical Manual of Mental Disorders was initially influenced by Cleckley's work but later changed the diagnostic criteria to overt antisocial behaviors in order to avoid more subjective judgments of personality traits. Robert Hare introduced the influential "Psychopathy Checklist" (PCL) in 1980, revised in 1991 (PCL-R). It is the most widely used measure of psychopathy. There are several self-report tests, with the "Psychopathic Personality Inventory" (PPI) being the most popular.
Society and culture
Famous individuals are sometimes diagnosed, perhaps at a distance, as psychopaths. In a report prepared for the Office of Strategic Services in 1943, Walter C. Langer of Harvard University described Adolf Hitler as a "neurotic psychopath."
References
- American Heritage Dictionary
- R. James R. Blair. "Neurobiological basis of psychopathy". Retrieved 2013-05-15.
- Merriam-Webster Dictionary. "Definition of psychopathy". Retrieved 2013-05-15.
- Encyclopedia of Mental Disorders. "Hare Psychopathy Checklist". Retrieved 2013-05-15.
- Skeem, J. L.; Polaschek, D. L. L., Patrick, C. J., Lilienfeld, S. O. (15 December 2011). "Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy". Psychological Science in the Public Interest 12 (3): 95–162. doi:10.1177/1529100611426706.
- Patrick, Christopher J (Editor). (2005) Handbook of Psychopathy. Guilford Press. Page 61.
- Scott Lilienfeld and Hal Arkowitz 2007 What "Psychopath" Means: It is not quite what you may think Scientific American
- "Psychopathy", Online Etymology Dictionary Retrieved August 1st 2011
- Online Etymology Dictionary: Psychopathic Retrieved January 21st 2012
- Online Etymology Dictionary: Psychopath Retrieved January 21st 2012
- Online Etymology Dictionary: Pathological Retrieved January 21st 2012
- Burgy, M. (20 August 2008). "The Concept of Psychosis: Historical and Phenomenological Aspects". Schizophrenia Bulletin 34 (6): 1200–1210. doi:10.1093/schbul/sbm136. PMC 2632489. PMID 18174608.
- Medlineplus Psychopath or Psychopathy Retrieved January 21st 2012
- Medilexicon powered by Stedman's, part of Lippincott Williams & Wilkins Psychopath Retrieved January 21st 2012
- Online Etymology Dictionary: Psychopathic
- Wiktionary: Psycho Retrieved January 22nd 2012
- Lykken, David T. The Antisocial Personalities (1995).[page needed]
- Online Etymology Dictionary: Socio-
- Semple, David (2005). The Oxford Handbook of Psychiatry. USA: Oxford University Press. pp. 448–449. ISBN 0-19-852783-7.
- Cooke, David J.; Michie, Christine (2001). "Refining the construct of psychopath: Towards a hierarchical model". Psychological Assessment 13 (2): 171–88. doi:10.1037/1040-3590.13.2.171. PMID 11433793.
- Hare, Robert D.; Neumann, Craig S. (2008). "Psychopathy as a Clinical and Empirical Construct". Annual Review of Clinical Psychology 4: 217–46. doi:10.1146/annurev.clinpsy.3.022806.091452. PMID 18370617.
- Hare, R. D., & Neumann, C. N. (2006). The PCL-R Assessment of Psychopathy: Development, Structural Properties, and New Directions. In C. Patrick (Ed.), Handbook of Psychopathy (pp. 58-88). New York: Guilford.
- Hare, R. D. (2003). Manual for the Revised Psychopathy Checklist (2nd ed.). Toronto, ON, Canada: Multi-Health Systems.[page needed]
- Minkel, JR. Fear Review: Critique of Forensic Psychopathy Scale Delayed 3 Years by Threat of Lawsuit June 17, 2010
- Walters, Glenn D. (1 April 2004). "The Trouble with Psychopathy as a General Theory of Crime". International Journal of Offender Therapy and Comparative Criminology 48 (2): 133–148. doi:10.1177/0306624X03259472. PMID 15070462.
- Psychopathy: A Rorschach test for psychologists? 2011 by Karen Franklin, Ph.D. in Witness
- Miller, A. K.; Rufino, K. A., Boccaccini, M. T., Jackson, R. L., Murrie, D. C. (9 March 2011). "On Individual Differences in Person Perception: Raters' Personality Traits Relate to Their Psychopathy Checklist-Revised Scoring Tendencies". Assessment 18 (2): 253–260. doi:10.1177/1073191111402460. PMID 21393315.
- Davison, G.C., Neale, J.M., Blankstein, K.R., & Flett, G.L. (2002). Abnormal Psychology. (Etobicoke: Wiley)[page needed]
- "Psychopathy and Antisocial Personality Disorder: A Case of Diagnostic Confusion". Robert D. Hare, Ph.D. Psychiatric Times. Vol. 13 No. 2. February 1, 1996.
- Hare, R.D., Hart, S.D., Harpur, T.J. Psychopathy and the DSM—IV Criteria for Antisocial Personality Disorder (PDF).
- DSM-V revision panel T 04 Antisocial Personality Disorder (Dyssocial Personality Disorder). Retrieved January 23, 2012.
- Harris GT, Rice ME, Quinsey VL (April 1994). "Psychopathy as a taxon: evidence that psychopaths are a discrete class". Journal of Consulting and Clinical Psychology 62 (2): 387–97. doi:10.1037/0022-006X.62.2.387. PMID 8201078.
- Marcus DK, John SL, Edens JF (November 2004). "A taxometric analysis of psychopathic personality". Journal of Abnormal Psychology 113 (4): 626–35. doi:10.1037/0021-843X.113.4.626. PMID 15535794.
- Edens, John F.; Marcus, David K., Lilienfeld, Scott O., Poythress, Norman G., Jr. (1 January 2006). "Psychopathic, Not Psychopath: Taxometric Evidence for the Dimensional Structure of Psychopathy". Journal of Abnormal Psychology 115 (1): 131–144. doi:10.1037/0021-843X.115.1.131. PMID 16492104.
- Patrick, C. J.; Fowles, D. C.; Krueger, R. F. (2009). "Triarchic conceptualization of psychopathy: Developmental origins of disinhibition, boldness, and meanness". Development and Psychopathology 21 (3): 913. doi:10.1017/S0954579409000492.
- Hare, Robert D. (1999). Without Conscience: The Disturbing World of the Psychopaths Among Us. New York: Guilford Press. ISBN 1-57230-451-0.
- Millon, Theodore; Davis, Roger D. "Chapter 11: The Five-Factor Model of Personality, Psychopathy: Antisocial, Criminal, and Violent Behavior.
- Marvin Zuckerman (1991) Psychobiology of personality Cambridge University Press, p. 390. ISBN 0-521-35942-2
- Otto F., Kernberg (2004). Aggressivity, Narcissism, and Self-Destructiveness in the Psychotherapeutic Relationship: New Developments in the Psychopathology and Psychotherapy of Severe Personality Disorders. Yale University Press. ISBN 0-300-10180-5.[page needed]
- Cleckley, The Mask of Sanity: An attempt to clarify some issues about the so-called psychopathic personality pp.338-339 (5th ed.)
- Cleckley, The Mask of Sanity: An attempt to clarify some issues about the so-called psychopathic personality pp.452 (5th ed.)
- Koenigs, M.; Kruepke, M., Zeier, J., Newman, J. P. (18 July 2011). "Utilitarian moral judgment in psychopathy". Social Cognitive and Affective Neuroscience 7 (6): 708–14. doi:10.1093/scan/nsr048. PMC 3427868. PMID 21768207.
- Young, L.; Koenigs, M.; Kruepke, M.; Newman, J. (2012). "Psychopathy increases perceived moral permissibility of accidents". Journal of Abnormal Psychology 121 (in press): 659. doi:10.1037/a0027489.
- Hare, R. D.; Neumann, C. S. (2008). "Psychopathy as a Clinical and Empirical Construct". Annual Review of Clinical Psychology 4: 217–246. doi:10.1146/annurev.clinpsy.3.022806.091452. PMID 18370617.
- DeLisi, Matt; Vaughn, Michael G., Beaver, Kevin M., Wright, John Paul (NaN undefined NaN). "The Hannibal Lecter Myth: Psychopathy and Verbal Intelligence in the MacArthur Violence Risk Assessment Study". Journal of Psychopathology and Behavioral Assessment 32 (2): 169–177. doi:10.1007/s10862-009-9147-z.
- Blair, R. J. R. (2008). "The amygdala and ventromedial prefrontal cortex: Functional contributions and dysfunction in psychopathy". Philosophical Transactions of the Royal Society B: Biological Sciences 363 (1503): 2557–2565. doi:10.1098/rstb.2008.0027. PMC 2606709. PMID 18434283.
- Craig, Michael C; Marco Catani; Q Deeley; R Latham; E Daly; R Kanaan; M Picchioni; P K McGuire; T Fahy; Declan G M Murphy (2009-06-09). "Altered connections on the road to psychopathy". Molecular Psychiatry 14 (10): 946–53, 907. doi:10.1038/mp.2009.40. PMID 19506560. Retrieved 2010-07-20. Lay summary – The Times.
- Craig, M C; Catani, M; Deeley, Q; Latham, R; Daly, E; Kanaan, R; Picchioni, M; McGuire, P K et al. (2009). "Altered connections on the road to psychopathy". Molecular Psychiatry 14 (10): 946–53, 907. doi:10.1038/mp.2009.40. PMID 19506560.
- Craig MC, Catani M, Deeley Q, et al. (October 2009). "Altered connections on the road to psychopathy". Molecular Psychiatry 14 (10): 946–53, 907. doi:10.1038/mp.2009.40. PMID 19506560. Lay summary – Press Office Institute of Psychiatry, Kings College London (September 3, 2009).
- Blair, J; Mitchel D; Blair K (2005). Psychopathy, emotion and the brain. Wiley-Blackwell. pp. 25–27. ISBN 0-631-23336-9.[page needed]
- Coid, J.; Yang, M.; Ullrich, S.; Roberts, A.; Moran, P.; Bebbington, P.; Brugha, T.; Jenkins, R. et al. (2009). "Psychopathy among prisoners in England and Wales". International Journal of Law and Psychiatry 32 (3): 134–141. doi:10.1016/j.ijlp.2009.02.008. PMID 19345418.
- Nedopil, N; Hollweg, M; Hartmann, J; Jaser, R. "Comorbidity of psychopathy with major mental disorders". In Cooke, DJ; Forth, AE; Hare, RD. Psychopathy: Theory, Research and Implications for Society (NATO Science Series D). Springer. ISBN 0-7923-4919-9.[page needed]
- Smith SS, Newman JP.Alcohol and drug abuse-dependence disorders in psychopathic and nonpsychopathic criminal offenders.
- Kantor, Martin (2006). The Psychopathy of Everyday Life. p. 107. ISBN 0-275-98798-1.
- Patrick, Christopher J (Editor). (2005) Handbook of Psychopathy. Guilford Press. Pages 440-443.
- Jeremy F. Mills, Daryl G. Kroner, Robert D. Morgan Clinician's Guide to Violence Risk Assessment Psychopathic Traits pg 55. Guilford Press, 27 Oct 2010
- Yang, Min; Wong, Stephen C. P., Coid, Jeremy (1 January 2010). "The efficacy of violence prediction: A meta-analytic comparison of nine risk assessment tools". Psychological Bulletin 136 (5): 740–767. doi:10.1037/a0020473. PMID 20804235.
- Heilbrun, Kirk (2003). "Violence Risk: From Prediction to Management". Handbook of Psychology in Legal Contexts. p. 127. doi:10.1002/0470013397.ch5. ISBN 9780470013397.
- Woodworth, M.; Porter, S. (2002). "In cold blood: Characteristics of criminal homicides as a function of psychopathy". Journal of Abnormal Psychology 111 (3): 436–445. doi:10.1037//0021-843X.111.3.436. PMID 12150419.
- Porter, Stephen; Brinke, Leanne, Wilson, Kevin (1 February 2009). "Crime profiles and conditional release performance of psychopathic and non-psychopathic sexual offenders". Legal and Criminological Psychology 14 (1): 109–118. doi:10.1348/135532508X284310.
- Williams, K. M.; Cooper, B. S., Howell, T. M., Yuille, J. C., Paulhus, D. L. (1 December 2008). "Inferring Sexually Deviant Behavior From Corresponding Fantasies: The Role of Personality and Pornography Consumption". Criminal Justice and Behavior 36 (2): 198–222. doi:10.1177/0093854808327277.
- Porter, S; Woodworth, M, Earle, J, Drugge, J, Boer, D (2003 Oct). "Characteristics of sexual homicides committed by psychopathic and nonpsychopathic offenders". Law and human behavior 27 (5): 459–70. doi:10.1023/A:1025461421791. PMID 14593792.
- Jill S. Levenson, John W. Morin (2000) Treating Non-offending Parents in Child Sexual Abuse Cases p. 7 SAGE, ISBN 0-7619-2192-3
- Horgan, J. (2005) The Psychology of Terrorism. Pg 49 USA: Routledge
- Häkkänen-Nyholm, H. & Nyholm, J-O. (2012) Psychopathy and Law: A Practitioners Guide. Pg 177 UK: John Wiley & Sons
- Board, B.J. & Fritzon, Katarina, F. (2005). Disordered personalities at work. Psychology, Crime and Law, 11, 17-32
- J. M. MacDonald (1963). "The Threat to Kill". American Journal of Psychiatry 120 (2): 125–130.
- Weatherby, G. A.; Buller, D. M.; McGinnis, K. (2009). "The Buller-McGinnis model of serial-homicidal behavior: An integrated approach" (PDF). Journal of Criminology and Criminal Justice Research and Education 3: 1.
- Skrapec, C. and Ryan, K., 2010-11-16 "The Macdonald Triad: Persistence of an Urban Legend" Paper presented at the annual meeting of the ASC Annual Meeting, San Francisco Marriott, San Francisco, California
- McClellan, Janet (December 2007). "Animal Cruelty and Violent Behavior: Is There a Connection?". Journal of Security Education 2 (4): 29–45. doi:10.1300/J460v02n04_04 (inactive 2010-03-23).
- Scott, Shirley Lynn. "What Makes Serial Killers Tick?". truTV.com. Retrieved 2013-01-10.
- Glenn, A. L.; Kurzban, R.; Raine, A. (2011). "Evolutionary theory and psychopathy". Aggression and Violent Behavior 16 (5): 371. doi:10.1016/j.avb.2011.03.009.
- Caspi A, McClay J, Moffitt TE, Mill J, Martin J, Craig IW, Taylor A, Poulton R (August 2002). "Role of genotype in the cycle of violence in maltreated children". Science 297 (5582): 851–4. doi:10.1126/science.1072290. PMID 12161658. Lay summary – eurekalert.org (2002-08-01).
- Frazzetto G, Di Lorenzo G, Carola V, et al. (2007). "Early trauma and increased risk for physical aggression during adulthood: the moderating role of MAOA genotype". In Baune, Bernhard. PLoS ONE 2 (5): e486. doi:10.1371/journal.pone.0000486. PMC 1872046. PMID 17534436.
- Sabol SZ, Hu S, Hamer D (September 1998). "A functional polymorphism in the monoamine oxidase A gene promoter". Hum. Genet. 103 (3): 273–9. doi:10.1007/s004390050816. PMID 9799080.
- Rod Lea, Geoffrey Chambers, Journal of the New Zealand Medical Association. 2 March 2007. Monoamine oxidase, addiction, and the “warrior” gene hypothesis, Vol 120 No 1250
- "Maori 'warrior gene' claims appalling, says geneticist". News. NZ Herald News. Retrieved 2009-01-27.
- "Scientist debunks 'warrior gene'". News. NZ Herald News. Retrieved 2009-09-11.
- Buss, D. M. (2009). "How Can Evolutionary Psychology Successfully Explain Personality and Individual Differences?". Perspectives on Psychological Science 4 (4): 359–366. doi:10.1111/j.1745-6924.2009.01138.x.
- Wilson, K.; Juodis, M.; Porter, S. (2011). "Fear and Loathing in Psychopaths: A Meta-Analytic Investigation of the Facial Affect Recognition Deficit". Criminal Justice and Behavior 38 (7): 659. doi:10.1177/0093854811404120.
- Weber, S.; Habel, U.; Amunts, K.; Schneider, F. (2008). "Structural brain abnormalities in psychopaths—a review". Behavioral Sciences & the Law 26 (1): 7–9. doi:10.1002/bsl.802. PMID 18327824.
- Blair, James (2002). "Neurobiological basis of psychopathy". The British Journal of Psychiatry: the journal of mental science 182: 5–7. PMID 12509310.
- Pridmore, S., Chambers, A., & McArthur , M. (2005). Neuroimaging in psychopathy. Australian and New Zealand Journal of Psychiatry, 39(10), 856. doi: 10.1111/j.1440-1614.2005.01679.x
- Glenn, A. L.; Raine, A. (2008). "The Neurobiology of Psychopathy". Psychiatric Clinics of North America 31 (3): 463–475. doi:10.1016/j.psc.2008.03.004. PMID 18638646.
- Tikkanen, R.; Auvinen-Lintunen, L.; Ducci, F.; Sjöberg, R. L.; Goldman, D.; Tiihonen, J.; Ojansuu, I.; Virkkunen, M. (2011). "Psychopathy, PCL-R, and MAOA genotype as predictors of violent reconvictions". Psychiatry Research 185 (3): 382–386. doi:10.1016/j.psychres.2010.08.026. PMC 3506166. PMID 20850185.
- Beauchaine, Theodore P.; Klein, Daniel N.; Crowell, Sheila E.; Derbidge, Christina; Gatzke-Kopp, Lisa (2009). "Multifinality in the development of personality disorders: A Biology × Sex × Environment interaction model of antisocial and borderline traits". Development and Psychopathology 21 (3): 735–70. doi:10.1017/S0954579409000418. PMC 2709751. PMID 19583882.
- Gollan JK, Lee R, Coccaro EF (2005). "Developmental psychopathology and neurobiology of aggression". Development and Psychopathology 17 (4): 1151–71. doi:10.1017/S0954579405050546. PMID 16613435.
- Lee R, Coccaro ER. Neurobiology of impulsive aggression: Focus on serotonin and the orbitofrontal cortex. In: Flannery DJ, Vazsonyi AT, Waldman ID, editors. The Cambridge handbook of violent behavior and aggression. New York: Cambridge University Press; 2007. pp. 170–186. ISBN 978-0-521-60785-8
- van Goozen SH, Fairchild G, Snoek H, Harold GT (January 2007). "The evidence for a neurobiological model of childhood antisocial behavior". Psychological Bulletin 133 (1): 149–82. doi:10.1037/0033-2909.133.1.149. PMID 17201574.
- Buckholtz, Joshua W. (March 14, 2010). "Psychopaths' brains wired to seek rewards, no matter the consequences". Nature Neuroscience. Retrieved 1 October 2011.
- Blanchard a, L. M.; Lyons, M. (2010). "An investigation into the relationship between digit length ratio (2D: 4D) and psychopathy". The British Journal of Forensic Practice 12 (2): 23–31. doi:10.5042/bjfp.2010.0183.
- Harris, Grant; Rice, Marnie (2006). "Treatment of psychopathy: A review of empirical findings". In Patrick, Christopher. Handbook of Psychopathy. pp. 555–72. ISBN 9781593855918.
- Harris, Grant; Rice, Marnie (2006). "Treatment of psychopathy: A review of empirical findings". In Patrick, Christopher. Handbook of Psychopathy. pp. 555–572
- The Mental Health Act (UK) Reforming The Mental Health Act, Part II, High risk patients Accessed June 26, 2006
- Paul Harrison & John Geddes (2005-07-18). Lecture Notes: Psychiatry. Blackwell Publishing. pp. 163–165. ISBN 978-1-4051-1869-9.
- Nathan James, Kenneth R. Thomas, Cassandra Foley. Commitment of Sexually Dangerous Persons. July 2, 2007. Congressional Research Service. http://assets.opencrs.com/rpts/RL34068_20070702.pdf
- Neumann, Craig S.; Hare, Robert D. (2008). "Psychopathic traits in a large community sample: Links to violence, alcohol use, and intelligence". Journal of Consulting and Clinical Psychology 76 (5): 893–9. doi:10.1037/0022-006X.76.5.893. PMID 18837606.
- Coid J, Yang M, Ullrich S, Roberts A, Hare RD (2009). "Prevalence and correlates of psychopathic traits in the household population of Great Britain". International Journal of Law and Psychiatry 32 (2): 65–73. doi:10.1016/j.ijlp.2009.01.002. PMID 19243821.
- Langer, Walter C. (1972) . The Mind of Adolf Hitler: The Secret Wartime Report. New York: Basic Books. p. 126. ISBN 978-0-465-04620-1.
- Blair, J. et al. (2005) The Psychopath - Emotion and the Brain. Malden, MA: Blackwell Publishing, ISBN 978-0-631-23335-0
- Cleckley, Hervey M. The Mask of Sanity: An Attempt to Reinterpret the So-Called Psychopathic Personality, 5th Edition, revised 1984, PDF file download.
- Hare, Robert D. (1999). Without Conscience: The Disturbing World of the Psychopaths Among Us. New York: Guilford Press. ISBN 1-57230-451-0.
- Paul Babiak & Robert D. Hare. Snakes in Suits: When Psychopaths Go To Work. HarperCollins, New York, NY. ISBN 978-0-06-114789-0
- Häkkänen-Nyholm, H. & Nyholm, J-O. (2012). Psychopathy and Law: A Practitioners Guide. Chichester: John Wiley & Sons.
- Oakley, Barbara, Ph.D., Evil Genes: Why Rome Fell, Hitler Rose, Enron Failed, and My Sister Stole My Mother's Boyfriend. Prometheus Books, Amherst, NY, 2007, ISBN 1-59102-665-2.
- Michael H. Thimble, F.R.C.P., F.R.C. Psych. Psychopathology of Frontal Lobe Syndromes.
- Widiger, Thomas (1995). Personality Disorder Interview-IV, Chapter 4: Antisocial Personality Disorder. Psychological Assessment Resources, Inc. ISBN 0-911907-21-1.
- Dutton, K. (2012) The Wisdom of Psychopaths ISBN 9780374709105 (e-book)
|Look up psychopathy in Wiktionary, the free dictionary.|
- Handbook of Psychopathy (2007) on Google Books.
- The Mask of Sanity, 5th Edition, PDF of Cleckley's book, 1988
- Without Conscience Official web site of Dr. Robert Hare
- Psychopathy in Psychiatry and Philosophy: An Annotated Bibliography Malatesti, L
- Understanding The Psychopath: Key Definitions & Research
- The Paradox of Psychopathy Psychiatric Times, 2007 (nb: inconsistent access)
- Into the Mind of a Killer Nature, 2001
- Can A Test Really Tell Who's A Psychopath? NPR audio, text and expert panel report, 2011
- What Psychopaths Teach Us about How to Succeed Scientific American, October 2012 | 1 | 18 |
SALT PRODUCTION [and "secret salt"]
Continuous and reliable supplies of salt were a matter of such importance that the establishment of early settlements, the rise and decay of civilisations, demographic shifts of populations, and the development of agriculture were intimately related to the immediate availability of salt. The power to control a population's salt supply was power over life and death. Erratic sea-level changes, particularly in the Mediterranean, where the minimal tide was relied upon to fill coastal evaporation pans, prevented some of these civilisations from obtaining consistent salt supplies, causing them to migrate, decay, or conquer.
Peat - Solar pan evaporation - Rock mining - Saltpeter - Tinder - Gunpowder - The East India Company - Glass - Leather - Table salt
Salt is physiologically necessary for human life, but in the past [prior to the Industrial Revolution] the known mineral salt sources were so limited that its supply was a critical demographic power factor for most communities.

It was only available as visible and exposed rock outcrops in arid regions, or as dried-out salt cake on the shores of some seas and salt lakes. In areas with wet climates the protruding salt dissolved, making it almost impossible to discover. It is probably for this, more than for any other reason, that many of the great civilisations first developed near deserts and desert climates, for example the Mediterranean region, at the edges of the "arid" belt.
Solar evaporation on vast flat coastal areas was considerably easier than manually quarrying and hacking at rock salt, though the technology was not easy and was handed down from generation to generation. A large share of the world's consumption of salt is still made by the ancient methods: trapping seawater or salt-spring brines, then evaporating the brine and concentrating the salt, either artificially or under the sun's heat.
U.S. salt production has become far more efficient. At the time of the U.S. Civil War, 3,000 workers produced over 225,000 tons of salt in the United States. Today there are a third more workers, but they produce 100 times more salt.
China has more than doubled its production in the last
Worldwide, salt production tracks consumption which, in turn, reflects population growth (food salt) and industrial development (chemical salt, salt for animal nutrition, roadway safety, water conditioning, etc.). Global salt production was about 250 million tons in 2006; see Salt Institute statistics.
Table: Historic salt production per man employed [known figures]

Period | Locality | Production (tons) | Men employed | Method of production | Tons per man per year
1900 | Taodeni (Sahara) | 4,000 | 250 | primitive mining | 16
1900 | Cosenza (Italy) | 6,000 | 250 | primitive mining | 24
1890 | Sicily | 17,000 | 400 | primitive mining | 43
1660 | Tirol (Austria) | 12,000 | 250 (+300 gathering wood) | brining | 48 (22)
1700 | Rhe (France) | 4,000 | 250 | solar | 16
1960 | Reichenhall (Germany) | 100,000 | 400 | brining | 250
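The productivity column above follows directly from the production and manpower figures. A minimal sketch recomputing it (the row data is copied from the table; the structure and names are my own):

```python
# Sketch: recompute the "tons per man per year" column of the table above.
rows = [
    # (year, locality, tons_produced, men_employed, expected_tons_per_man)
    (1900, "Taodeni (Sahara)",        4_000, 250, 16),
    (1900, "Cosenza (Italy)",         6_000, 250, 24),
    (1890, "Sicily",                 17_000, 400, 43),
    (1660, "Tirol (Austria)",        12_000, 250, 48),
    (1700, "Rhe (France)",            4_000, 250, 16),
    (1960, "Reichenhall (Germany)", 100_000, 400, 250),
]

for year, place, tons, men, expected in rows:
    per_man = tons / men
    # allow rounding to the nearest ton, as the table does
    assert abs(per_man - expected) <= 0.5, (place, per_man, expected)
    print(f"{year} {place}: {per_man:.1f} t per man-year")

# The Tirol entry drops to 12,000 / (250 + 300) ≈ 22 t per man-year once the
# 300 wood gatherers are counted, matching the "(22)" in the table.
assert round(12_000 / 550) == 22
```

The check makes the table's point vivid: brining at Reichenhall in 1960 was roughly fifteen times more productive per worker than primitive Saharan mining.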
Coastal salt manufacture depended on the availability of wide flat coastal areas and on making clever use of natural shallow depressions, lagoons, or man-made salterns about 15-20 cm deep. These had to be positioned at mid-tide level to facilitate filling the evaporation pans [without industrial pumps] in order to concentrate the brine, and later to allow harvesting and drying of the precipitated, crystallised salt.
Chinese technology included drilling into a salt deposit with at least two holes: one to feed and flood fresh water into the salt diapir, and a second to allow the water, after dissolving the salt, to well up into the evaporation pans, where it could again be concentrated by evaporation, either by solar heat or by boiling over whatever fuel was convenient.
[Image: Drilling for salt, circa 400 AD, to depths of 3000 feet]
[Image: Chinese salt pan production, still used today - the great bend of the Yellow River]
[Image: Town drilling rigs and wells]
It takes approximately 50,000 cubic metres of sea water, spread over 100,000 square metres of flat solar evaporation area, to produce 1,000 tons of salt a year. There are two other important conditions:
- an equable climate with a warm breeze and a hot sun
- a reasonably steady sea level. [The Mediterranean tide fluctuates only a few centimetres, whereas ocean tides may be measured in metres.]

Suitable coastal sites were abundant when the sea was one or two metres below its present level; they became comparatively rare with a sea level one or two metres higher. Even in the delta areas of some rivers, like the Nile, Rhone and Euphrates, the establishment of new pond levels was very difficult, and the cycling of the brines from one level of concentration to the next could take months or years.
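These figures can be sanity-checked against the salt content of seawater. A minimal sketch, assuming textbook values of about 1025 kg/m³ for seawater density and roughly 2.7% NaCl by mass (neither figure is from the article):

```python
# Sketch: sanity-check the stated yield of 1,000 t of salt a year from
# 50,000 m³ of seawater over 100,000 m² of pans.
volume_m3 = 50_000          # seawater fed into the pans per year (from the text)
pan_area_m2 = 100_000       # flat evaporation area (from the text)
seawater_density = 1025.0   # kg/m³, typical surface seawater (assumed)
nacl_fraction = 0.027       # ~3.5% dissolved salts, of which ~77% is NaCl (assumed)

seawater_mass_kg = volume_m3 * seawater_density
nacl_tons = seawater_mass_kg * nacl_fraction / 1000.0
print(f"theoretical NaCl content: {nacl_tons:.0f} t")  # ≈ 1384 t

# The practical figure of 1,000 t implies a recovery of roughly 70%,
# plausible once harvest losses and the bittern left behind are allowed for.
recovery = 1000.0 / nacl_tons
print(f"implied recovery: {recovery:.0%}")

# The water column evaporated works out to only half a metre a year —
# which is why a hot sun and a drying breeze were essential.
depth_m = volume_m3 / pan_area_m2
assert depth_m == 0.5
```

Under these assumptions the article's 1,000-ton figure sits comfortably below the theoretical maximum, as a practical harvest should.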
A change of climate conditions or a minor ocean fluctuation could therefore have had a serious effect on ancient coastal saltmaking. Manual rock mining, or inland salt springs and lakes like the Dead Sea, suddenly became the only alternatives.
Table: Fuel needed for production of 1 ton of salt (the original layout is garbled; recoverable fragments):
- Source of fuel and salt
- brining and wood gathering on 100,000 sq m of woodland per year
- England, east coast
- vacuum pan - compression still
- 100 sq km of flat impermeable pan area at the ocean
Among the important concepts and technical innovations developed for producing salt and moving the huge quantities of brine were the measurement of density and pumping by screw, both ascribed to Archimedes. The first use of impellers and sail-type windmills was to operate Archimedean screws and chain pumps for controlling water and brine flows. Even so, to move the huge quantities of brine from the sea into the evaporation pans, controlled flooding by the tides was the only answer.
In regions where solar energy could not be used, the salt maker was forced to use solid fuel to 'boil' brines. In northern Europe it was peat [PEAT PRODUCTION OF SALT] and also wood; whole forests were devoured, leaving only the stumps. In Japan it was seaweed.
[Image: Halle, Germany - boiling pan model]

[Image: Lüneburg salt well before 1500] In 1569 the well was converted to a pumping system. Until then the brine was drawn out of the "Sod" (well pit) with a barrel-like jar called an "Öseammer". The salt-works labourers, called "Sodeskumpane", had to lift the jars with their own muscle power; a kind of seesaw called a "Sodrute" served as a lever. When visitors come upstairs from the basement they stand in front of a faithful copy of such a "Sodrute" with an "Öseammer" hanging from it.
Solar pan Evaporation
Crystallisation of salt in solar pans in hot climates occurs naturally. The crystals first form on the surface of the brine - the evaporating surface brine reaches saturation point before the cooler lower layers - and sink as they become soaked.
[Image: Salt mushrooms growing in the Dead Sea - nucleation occurs quickly upon anything protruding from the brine]
Additional crystals grow beside these partially submerged crystals, rather than below or above them, and a typical "funnel" or wedge-shaped form takes shape. The familiar salt "mushrooms" can be seen growing in salt lagoons and pans.
Manual harvesting in Morocco
The specific gravity of a sodium chloride crystal is 2.16. Saturated brine at 25 °C contains 26.7% salt and has a specific gravity of 1.2004; at 15 °C a saturated solution contains 26.5% salt and has a specific gravity of 1.203. Hence a solution saturated at a higher temperature is specifically lighter, even though it contains a greater quantity of salt. It is this effect that allowed salt makers to crystallise "blocks" or briquettes of salt on the surfaces of ponds, using floating elements such as sticks and straws to form the crusts of salt.
It should be noted that with most other substances crystallisation cannot occur at the solution surface, because their solubility increases more rapidly than their specific gravity decreases. [see Jewish Salt Technology - Religion]
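The argument can be checked numerically from the figures just given; a minimal sketch (only the two specific-gravity/concentration pairs come from the text, the rest is my own):

```python
# Sketch: the figures in the text show why salt crystallises at the *surface*
# of a pond. The warm surface brine is saturated yet lighter than the cool
# brine below, so it stays on top while it evaporates.

# (specific gravity, salt mass fraction) of saturated brine
warm_25C = (1.2004, 0.267)
cool_15C = (1.2030, 0.265)

def salt_per_litre(specific_gravity, fraction):
    """Grams of dissolved salt in one litre of brine."""
    return specific_gravity * 1000.0 * fraction

warm_g = salt_per_litre(*warm_25C)  # ≈ 320.5 g/L
cool_g = salt_per_litre(*cool_15C)  # ≈ 318.8 g/L

# Warmer saturated brine is lighter overall ...
assert warm_25C[0] < cool_15C[0]
# ... yet holds slightly more salt, both by fraction and per litre.
assert warm_25C[1] > cool_15C[1]
assert warm_g > cool_g
print(f"25 °C: {warm_g:.1f} g/L, 15 °C: {cool_g:.1f} g/L")
```

So the saturated surface layer, being warmer, floats on the denser cool brine below it, and crystals nucleate there first - the behaviour the salt makers exploited with their floating sticks and straws.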
[Image: Enclosing solar evaporation ponds and pyramids of raked-up salt]
At the end of the last Ice Age, around 17,000-15,000 BC, the ice sheets covering the earth's surface began to retreat, flooding the continental shelf where many early populations seem to have lived. The average mean sea-level rise was about 1 metre a century, the most rapid period being between 8,000 and 5,000 BC. By about 2000 BC the oceans had recovered from the Ice Age low level, risen again, and had probably reached a metre or so above today's level.
Sea-level variations, whether seasonal, short-lived or long-term, may have been caused by different events or a combination of them. Among the most obvious are changes in atmospheric pressure, changes in ocean currents, wind-driven waves, storm surges, and heavy rainfall increasing the run-off from rivers.
However, as the last Ice Age demonstrated [and as is of present concern regarding global warming], the darkening of Antarctica by volcanic ash or other pollutants, and the subsequent warming, fracturing and melting of the polar ice sheets and glaciers, was, and is, of catastrophic proportion by comparison. Control of the colour of the Antarctic ice cap, and the albedo of its white snow cap, may be our immediate concern in the near future.
ROCK MINING and PRIMITIVE
Many regions of the Earth, for instance equatorial Africa, consist of igneous rock where rain and ocean-spray dust are the only sources of salt. Plants exist which are capable of concentrating such dilute solutions by evapotranspiration, and there are insects which can collect salt from water containing less than 0.006% of chlorides and concentrate it in their bodies to about 0.3%. Until recently some humans survived solely by drinking the blood and urine of herded and wild animals; roaming over wide areas, these animals collected and concentrated the salt in their blood by feeding on large quantities of plants. (Many tribes, like the Masai, kept their animals alive for systematic bleeding.)

[Image: Typical Chalcolithic hammer]

Springer's map of precolonial Africa shows immense areas where this happened, and the low population density of the African hinterland can be ascribed to this diet of near salt-starvation. Furthermore, there is no doubt that early settlements grew up around salty springs which hunting tribes had discovered by following animals to their salt licks.
Another remarkable source of
salt is registered in Springer's map, namely where people burned plants to use the
resulting ash as a supply of very poor grade salt; in effect they replaced the stomach of
the ruminant animal by combustion.
Cardona's Salt Mountain
One might almost call it the first primitive agriculture. Schultz describes how "... in the Brazilian rain forest, deep in the Amazon river watershed, live the Suya people. Their women collect water hyacinths, dry them and burn their leaves. The ash is then passed through a kind of grass filter after it has been dispersed in hot water. The filtrate is evaporated in an earthenware pot over a wood fire until it becomes a thick brown sauce which jellies to a dirty coloured mass when cooled down. This, divided into minute portions, serves as salt."
Although this ash is too rich in potassium carbonate to serve as a good source of salt, it is almost the only way to avoid salt starvation. The other alternative is cannibalism [19b]. As Springer points out, "It is known that salt-starved animals eat part of their litter in order to stay alive, and consequently several authors have ventured the opinion that extreme salt hunger is one of the causes of cannibalism. This seems to have become habitual in parts of Africa and New Guinea where people have been subject to serious salt deficiency for a long time..." Primitive peoples living in rain forests far from the sea suffer the same deprivation, as generally it is not practicable to transport ocean water, with its 97% water content, deep inland. The aborigines in New Guinea, however, "... make secret expeditions to the sea coast.... to put seawater into hollow bamboos which are carried back to their tribe."
There were many early attempts to quarry and mine salt. Salt tunnels containing stone hammers and axes have been found at sites in Asia Minor, Armenia, South America, and near the Salt River civilization in Arizona. The Hallstatt salt mines in the Austrian Alps and the Italian mines in Lungro, Cosenza and Etruscan Volterra supported prehistoric communities, forming important centers of inland civilization. Surprisingly, very similar specialized tools were used on all four continents. The production of salt from concentrated brine is much easier than quarrying or mining. The method consists essentially of bringing the natural brine into shallow ponds enclosed by low earth walls and allowing the Sun to evaporate the water; the deposited salt layers are then harvested. On the shores of the Dead Sea one may find disused solar pans that were dyked by ancient saltmakers. In China, very old solar pans described in 1882 by the German geologist von Richthofen are still in operation at the saltwater swamps that lie in the great bend of the Yellow River. Operations similar to those described by von Richthofen provided salt for Iran's important Isfahan district. In Africa, Timbuctu and Kano were supplied for thousands of years from Taodeni and Bilma.
Salt brine boiling
A striking feature of salt swamps is the red colour caused by algae and bacteria multiplying in their stagnant water. The salt produced there is red, whereas quarried salt is grey and usually contains gypsum, which gives it less flavour because of reduced solubility. The Madaba Map, dating from about 550 A.D., shows two ships sailing on the Dead Sea, one loaded with reddish salt from the old solar ponds and the other with grey salt from the quarry at Mt. Sodom.
The red brine not only looks like blood but also tastes like it, and makes a deep and disturbing impression. The Bible indicates this in 2 Kings 3:22: "And they rose up early in the morning and the Sun shone upon the water, and the Moabites saw the water on the other side as red as blood. And they said, 'This is blood'."
Airphoto of 'red' brines in static ponds coloured by 'Dunaliella'
This passage probably refers to the red salt pans at Sodom. The old name "Sodom" for the southern end of the Dead Sea may be a contraction of the Hebrew words "sade" (meaning field) and "adom" (meaning red). That reddish salt was made there at the time of the Jewish uprising against Rome, about 130 A.D., is proved by pieces of salt found by Yadin in a burial cave of the period. This salt corresponds, in colour and size of crystal, to what would be expected from careful crystallization from red brine on sticks.
It was the Chinese who, about 400 A.D., conceived the modern idea of drilling deep into salt deposits and bringing up the brine for evaporation. They used bamboo pipes, and some borings were as deep as 1000 metres. As fuel for evaporation they used coal, wood or natural gas which came from the same wells.
Maya Treasure: Empire's First Wooden Artifacts
The remnants of a large salt factory are found submerged in a peat bog off the coast of Belize.
By Thomas H. Maugh II, Times Staff Writer
A Louisiana archeologist has discovered the remains of a massive Maya complex submerged in a lagoon off the southern coast of Belize.
Examination of the underwater site also revealed the first wooden
structural artifacts from the empire, including poles and beams used in
building the salt
factories. A wooden paddle from the canoes used to
transport the salt
via inland waterways also was discovered - the
first time such a Maya object has been found, researchers said.
Archeologist Heather McKillop of Louisiana State University reported
today in the Proceedings of the National Academy of Sciences that she
and her colleagues had so far discovered 45 facilities for salt production in the mangrove peat bogs of Punta Ycacos Lagoon.
"There are many more sites there," she said in an interview.
The discoveries are "tremendously exciting," said archeologist Tom
Guderjan of Texas Christian University, who was not involved in the
research. "We have never, in that region of the world, found
preservation of architectural materials [wood] like she has found
The discovery of the paddle is particularly intriguing, he said,
because even though Maya art shows canoes, researchers have been unable
to find any traces of them.
"We've all been looking for the canoe," Guderjan said. "It could be six
inches under the muck."
Salt played a
crucial role in ancient economies because humans needed
it to survive and also desired its taste. It also has a variety of
secondary uses, such as preserving food.
The cities of the Maya civilization are largely in areas that have no salt.
Researchers previously discovered ancient production
centers in the salt
flats of the Yucatan as well as along the Caribbean
coast, but none is large enough to have accommodated the needs of Maya
society, which dominated much of Central America from approximately the
4th century to the 16th century.
McKillop's findings suggest that many, if not most, of the Maya
facilities were along the coast and became submerged during the last
millennium as ocean levels rose. The immersion actually led to their
preservation. Being buried in peat protects wood from decay, McKillop
said, and being underwater prevents artifacts from being trampled,
making identification and analysis much easier.
McKillop initially identified four
facilities in the
lagoon and decided to expand the search. A team of students equipped
with snorkeling gear divided the surface into grids and looked for
submerged pottery, buildings and other items.
In three weeks of study, they found 41 sites characterized by pottery,
wooden posts and beams, obsidian objects and other artifacts.
The largest structure was at Chak Sak Ha Nal, where 112 posts define
the exterior walls of a rectangular building measuring about 36 by 65
feet. Inside the perimeter are 31 posts marking off rooms. The
arrangement of the structure's other pieces of wood, such as beams,
remains to be mapped.
The interior areas contain remnants of large, apparently mass-produced
urns that sat over fires on clay cylinders about a foot high. Seawater
would have been placed in the urns, scientists say. The water would
boil away, leaving behind the salt.
Although it has just begun examining the sites, McKillop's team has
found extensive evidence of artifacts produced in inland cities,
indicating well-developed trade over the Central American waterways.
The salt would have been loaded into canoes and paddled upstream, where it would be exchanged for a variety of goods.
The partially degraded paddle that was discovered - virtually identical to those seen in Maya art - ties the salt facilities to inland trade, McKillop said.
The facilities "represent a new kind of economy that we haven't looked
at before," she said. Researchers have long studied the royal court
workshops in large Maya cities that manufactured goods for the elite.
At the opposite end of the scale, they have studied household economies
where family members made things for their own use.
The salt factories represent an intermediate stage in which small groups of people were producing things for the entire society, McKillop said.
© Copyright David Bloch, 1996. All rights reserved.
Scientific Name: Procellaria aequinoctialis
Species Authority: Linnaeus, 1758
Taxonomic Notes: Procellaria aequinoctialis (Sibley and Monroe 1990, 1993) has been split into P. aequinoctialis and P. conspicillata following Brooke (2004).
Red List Category & Criteria: Vulnerable A4bcde ver 3.1
Reviewer/s: Butchart, S. & Symes, A.
Contributor/s: Barbraud, C., Bugoni, L., Colabuono, F., Cooper, J., Croxall, J., Martin, T., Phillips, R., Robertson, C. & Taylor, G.
This species is classified as Vulnerable because of suspected rapid declines, although almost no reliable estimates of historical populations exist. Very high rates of incidental mortality in longline fisheries have been recorded in recent years; the probability that these circumstances will continue and its susceptibility to predation and the degradation of breeding habitat indicate that a rapid and on-going population decline is likely.
Procellaria aequinoctialis breeds on South Georgia (Georgias del Sur), Prince Edward Islands (South Africa), Crozet Islands, Kerguelen Islands (French Southern Territories), Auckland, Campbell and Antipodes Islands (New Zealand), and in small numbers in the Falkland Islands (Islas Malvinas). Recently revised population estimates give a global population of c.3 million individuals. This is based on estimates of 773,150 breeding pairs on South Georgia in 2007 (ACAP 2012), 23,600 breeding pairs (9,800 to 36,800) on Crozet (Barbraud et al. in litt. 2008), 186,000-297,000 pairs on the Kerguelen Islands (Barbraud et al. 2009), at least c.100,000 on Disappointment (Auckland) in 1988 (ACAP 2012), 10,000 on Campbell in 1985 (ACAP 2012) and 58,725 on the Antipodes in 2011 (ACAP 2012). At least 55 pairs breed on the Falkland Islands, on Kidney Island, New Island and Bottom Island (Reid et al. 2007). On Bird Island (South Georgia), the population has apparently decreased by 28% over 20 years (Berrow et al. 2000), while in Prydz Bay (Antarctica), the number of birds at sea decreased by 86% during 1981-1993 (Woehler 1996). The species forages north to the subtropics and south to the pack-ice edge off Antarctica (Berrow et al. 2000, Catard et al. 2000, Phillips et al. 2006), and is distributed widely in all southern oceans (Croxall et al. 1984).
Native: Antarctica; Argentina; Australia; Brazil; Chile; Falkland Islands (Malvinas); French Southern Territories (the); Heard Island and McDonald Islands; Madagascar; Mozambique; Namibia; New Zealand; Peru; Saint Helena, Ascension and Tristan da Cunha; South Africa; South Georgia and the South Sandwich Islands; Uruguay
Present - origin uncertain: Bouvet Island; Ecuador
Population: A global population of 1,200,000 breeding pairs, down from 1,430,000 pairs in the 1980s, is estimated based on figures from 1985-2011. This equates to an estimated global population of c.3 million mature individuals, based on the estimated number of breeding pairs extrapolated according to a ratio from Brooke (2004).
Habitat and Ecology:
Behaviour: It is a burrow-nesting annual breeder, laying in mid-October to mid-November (ACAP 2009). Chicks usually fledge in late April (Barbraud et al. 2009). Outside the chick-rearing period, White-chinned Petrels breeding on South Georgia travel to Patagonian Shelf waters to feed (Phillips et al. 2006). Satellite tracking and ring recoveries from birds on Crozet Islands show that they spend the non-breeding season off the coasts of South Africa and Namibia (Barbraud in litt. 2008). Individuals from the Kerguelen Islands also winter off the coasts of South Africa and Namibia over the Benguela Current (Péron et al. 2010a).
Diet: White-chinned Petrels feed on cephalopods, crustaceans and fish (Berrow et al. 1999, Catard et al. 2000, Delord et al. 2010) and fisheries processing waste or discarded longline baits. Cephalopods were found to comprise the greatest component of the diet in one study (91% occurrence, 92% number, 90% mass) (Colabuono and Vooren 2007).
Foraging range: Birds range widely when searching for food resources, travelling up to 8,000 km on feeding forays in the breeding season (Berrow et al. 2000, Catard et al. 2000, Phillips et al. 2006, Delord et al. 2010a). Individuals breeding at the Crozet and Kerguelen islands display a bimodal foraging strategy, conducting either short trips to the surrounding shelf or long trips ranging from subtropical waters in the north to Antarctic waters in the south (Catard et al. 2000). Individuals breeding at the Kerguelen Islands target the seasonal ice zone where melting sea ice is gradually broken into floes and forage almost exclusively in open water (Péron et al. 2010b).
P. aequinoctialis constitute the majority of bird bycatch in Southern Ocean longline fisheries. It is one of the commonest species attending longline vessels off south-east Brazil during winter (Olmos 1997, Bugoni et al. 2008) and off Uruguay (Jiménez et al. 2009), and constitutes virtually all the recorded seabird bycatch from the Namibian hake fishery (Barnes et al. 1997, Petersen et al. 2007). In South Africa, White-chinned Petrels constitute 10% and 55% of the bycatch in pelagic and demersal longline fisheries respectively (Petersen et al. 2007). Prior to the introduction of bird streamer lines as a vessel permit condition in August 2006, approximately, 10% of the 18,000 birds killed annually in the South African hake trawl fishery were White-chinned Petrels (Watkins et al. 2007). In the Indian Ocean, between 2001 and 2003 the legal longline fishery for Patagonian toothfish Dissostichus eleginoides killed c.12,400 P. aequinoctialis per year (Delord et al. 2005). Following the introduction of mitigation measures this figure dropped to approximately 2,500 birds in the 2005-2006 season (CCAMLR 2006), and to 740 birds in the 2008-2009 season (CCAMLR 2010). In addition, an estimated 31,000-111,000 and 50,000-89,000 seabirds in 1997 and 1998 respectively, c.60% of which were P. aequinoctialis, were thought to be killed by IUU vessels (CCAMLR 1997, 1998). In recent years (2006) this figure has fallen to 4,583 seabirds in total (CCAMLR 2006). It is the second most common species caught in the Argentinean longline fleet, with an average capture rate for the period 1999-2003 of 0.014 ± 0.09 individuals per 1,000 hooks (Laich and Favero 2007). During autumn-winter most captures took place in the north of the Patagonian Shelf, whereas in spring-summer most were to the south, between 45-50 degrees South (Laich and Favero 2007). In the Australian Fishing Zone, more than 800 are potentially killed annually (Gales et al. 
1998) and, in New Zealand between 2003 and 2005, 14.5% of all the seabirds caught in trawl and longline fisheries and returned for autopsy were P. aequinoctialis (Baird and Smith 2007). Barbraud et al. (2009) estimated that any additional source of mortality that approaches 31,000 individuals would result in a population decline at the Kerguelen Islands. Although only 30% of this number are killed in local waters, and even fewer are now killed due to the implementation of mitigation measures, more than 31,900 White-chinned Petrels are estimated to be killed each year by demersal longline fishing in the Benguela Current marine ecosystem where individuals from the Kerguelen Islands spend the winter. This may mean that the population at the Kerguelen Islands is decreasing, although the additional presence of non-breeders from the Crozet Islands at the Benguela Current means that further research is required to confirm the population decline (Barbraud et al. 2009). Dillingham & Fletcher (2011) estimated that the potential for the world population to sustain additional mortality was 15,000 individuals. Rats (Rattus rattus and R. norvegicus) are significant predators at some breeding sites, such as Crozet (Jones et al. 2008), and cats predate nests at Kerguelen (Barbraud in litt. 2008). At South Georgia, breeding habitat is extensively degraded owing to erosion by expanding populations of Antarctic fur seal Arctocephalus gazella (Berrow et al. 2000). Introduced reindeer Rangifer tarandus also degrade breeding habitat on South Georgia (Poncet 2007). Although no adverse effects have been proven until recently, there are now reports of relatively high frequencies of plastic ingestion (Ryan 2008, Colabuono et al. 2009), as well as the occurrence of persistent organic pollutants (Colabuono et al. 2012) in this species.
Conservation Actions Underway
CMS Appendix II. ACAP Annex 1. Population monitoring and foraging ecology studies are being undertaken at South Georgia, Crozet, Prince Edward and Kerguelen (Poncet 2007). Several breeding sites are in protected areas.

Conservation Actions Proposed
Continue and extend monitoring studies. Where feasible, eliminate alien predators and reindeer from breeding islands. Promote adoption of best-practice mitigation measures in all fisheries within the species's range, including via intergovernmental mechanisms such as ACAP, FAO, and Regional Fisheries Management Organisations such as CCAMLR. Implement plans to remove rats and reindeer from South Georgia (R. Phillips in litt. 2012). Develop and implement plans to remove pigs from Auckland Island, rats, cats and reindeer from Kerguelen, and rats from Ile de la Possession, Crozet (R. Phillips in litt. 2012).
Citation: BirdLife International 2012. Procellaria aequinoctialis. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.2. <www.iucnredlist.org>. Downloaded on 20 May 2013.
Acute myelogenous leukemia (AML) arises from myeloid precursor cells in the bone marrow. These immature cells or blasts give rise to myeloblasts, monoblasts, erythroblasts, and megakaryoblasts. The symptoms of AML are similar to other acute leukemias and may present with flu-like symptoms, bleeding, and occasionally hepatosplenomegaly and lymphadenopathy. The identification of several chromosomal abnormalities has led to tremendous advances in leukemia research. Some investigators have proposed new classification systems based upon any possible association with myelodysplasia and cytogenetic abnormalities.
EPIDEMIOLOGY

Synonyms: acute non-lymphoblastic leukemia (ANLL)
Incidence: 2.1/100,000 in USA

Frequency of cases of AML by FAB subtype:
M0: 5-10%
M1: 10-20%
M2: 30-45%
M3: 5-10%
M4: 15-25%
M5A: 5-8%
M5B: 3-6%
M6: 5%
M7: 8-10%

Age (median and range) by FAB subtype:
M0: all age groups
M1: median 46 years
M2: 20% <25 years, 40% >60 years
M3: median 38 years
M4: median 50 years
M5A: median 16 years, 75% <25 years
M5B: median 49 years
M6: median 54 years
M7: all age groups

Chromosome abnormalities by FAB subtype (frequency, %):
M3: t(15;17)(q22;q11-12), 95-100
M4EO: inv(16)(p13q22) or t(16;16)(p13;q22), 100
M2: t(8;21)(q22;q22), 18-20
M1-M2: t(9;22)(q34;q11), 8
M1, M2, M4, M5, M6: +8, 9
M1, M4, M5: t(v;11)(v;q23), where v is a variable partner chromosome (e.g., 9 or 19)
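The subtype/abnormality associations tabulated above lend themselves to a simple lookup structure. The sketch below is illustrative only: the dictionary name, its layout, and the helper function are inventions of this example, with the keys and values taken from the table.

```python
# Recurrent cytogenetic abnormalities from the table above, keyed by
# abnormality; each value is (associated FAB subtypes, frequency in %).
# The structure and names here are invented for illustration.
AML_CYTOGENETICS = {
    "t(15;17)(q22;q11-12)": (["M3"], "95-100"),
    "inv(16)(p13q22) or t(16;16)(p13;q22)": (["M4EO"], "100"),
    "t(8;21)(q22;q22)": (["M2"], "18-20"),
    "t(9;22)(q34;q11)": (["M1", "M2"], "8"),
    "+8": (["M1", "M2", "M4", "M5", "M6"], "9"),
}

def subtypes_with(abnormality):
    """Return the FAB subtypes associated with an abnormality, or []."""
    entry = AML_CYTOGENETICS.get(abnormality)
    return entry[0] if entry else []

print(subtypes_with("t(8;21)(q22;q22)"))  # ['M2']
```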
Realistic Pathologic Classification of Acute Myeloid Leukemias
Daniel A. Arber
Am J Clin Pathol 2001;115:552-560 Abstract quote
Most classification systems of acute myeloid leukemia (AML) rely largely on the criteria proposed by the French-American-British (FAB) Cooperative Group. The recently proposed World Health Organization (WHO) classification of neoplastic diseases of the hematopoietic and lymphoid tissues includes a classification of AMLs.
The proposed WHO classification of AMLs includes traditional FAB-type categories of disease, as well as additional disease types that correlate with specific cytogenetic findings and AML associated with myelodysplasia. This system includes a large number of disease categories, many of which are of unknown clinical significance, and there seems to be substantial overlap between disease groups in the WHO proposal. Some disease types in the WHO proposal cannot be diagnosed without detailed clinical information, or they are diagnosed only by the cytogenetic findings. In this report, a realistic pathologic classification for AML is proposed that includes disease types that correlate with specific cytogenetic translocations and can be recognized reliably by morphologic evaluation and immunophenotyping and that incorporates the importance of associated myelodysplastic changes.
This system would be supported by cytogenetic or molecular genetic studies and could be expanded as new recognizable clinicopathologic entities are described.
LABORATORY / OTHER DIAGNOSTIC TESTING

Serum lysozyme: increased in 50% of cases of AML M5A

PCR
A novel method for the detection, quantitation, and breakpoint cluster region determination of t(15;17) fusion transcripts using a one-step real-time multiplex RT-PCR.
Choppa PC, Gomez J, Vall HG, Owens M, Rappaport H, Lopategui JR.
IMPATH, 5300 McConnell Ave, Los Angeles, CA 90066, USA.
Am J Clin Pathol 2003 Jan;119(1):137-44 Abstract quote
Individuals with acute promyelocytic leukemia (APL) usually express 1 of 3 primary hybrid transcripts associated with a t(15;17). The 3 fusion transcripts are the result of heterogeneous breakpoint cluster regions (bcr) within the promyelocytic leukemia (PML) gene and are denoted bcr1 (long), bcr2 (variant), and bcr3 (short) forms. Many researchers have shown that real-time quantitative reverse transcriptase-polymerase chain reaction (RT-PCR) of the involved transcript is a valuable tool for monitoring APL and its treatment. In addition, some research suggests that identification of a specific breakpoint region may be used to predict an individual's likelihood of relapse and possibly their response to all-trans retinoic acid treatment.
We describe the first reported 1-step multiplex RT-PCR assay capable of t(15;17) fusion transcript real-time relative quantitation and simultaneous transcript form identification in 2 reactions. This assay uses a novel dual-probe technique to achieve what has required a laborious procedure of 2 or more reactions followed by postamplification analysis.
We found a correlation of 100% in detection and breakpoint determination of the long, short, and variant forms with a breakpoint 5' to nucleotide 1709 compared with results from traditional methods.
GROSS APPEARANCE AND CLINICAL VARIANTS
Cutaneous promyelocytic sarcoma at sites of vascular access and marrow aspiration. A characteristic localization of chloromas in acute promyelocytic leukemia?
Sanz MA, Larrea L, Sanz G, Martin G, Sempere A, Gomis F, Martinez J, Regadera A, Saavedra S, Jarque I, Jimenez C, Cervera J, de La Rubia J.
Servicio de Hematologia, Hospital Universitario La Fe, Av. Campanar 21, 46009 Valencia, Spain.
Haematologica 2000 Jul;85(7):758-62 Abstract quote
Extramedullary disease (EMD) is a rare clinical event in acute promyelocytic leukemia (APL). Although the skin is involved in half of the reported EMD cases, the occurrence of cutaneous promyelocytic sarcoma (PS) has been described very rarely.
We report here three cases of PS which have the peculiarity of appearing at sites of punctures for arterial and venous blood and marrow samples (sternal manubrium, antecubital fossa, wrist over the radial artery pulse, catheter insertion scar).
At presentation, all patients had hyperleukocytosis and a morphologic diagnosis of microgranular acute promyelocytic leukemia variant confirmed at the genetic level by demonstration of the specific chromosomal translocation t(15;17). A BCR3 type PML/RARa transcript was documented in the two patients for whom diagnostic RT-PCR was available. Patients had morphologic bone marrow remission at the time the PS appeared. A predilection for the development of cutaneous PS at sites of previous vascular damage has been noted, but the pathogenesis remains largely unknown. A potential role for all-trans retinoic acid has been advocated, although one of the three patients in our series had received no ATRA.
A review of the literature revealed six similar cases and hyperleukocytosis at diagnosis was a consistent finding in all of them. A careful physical examination of these particular sites in the follow-up of patients at risk, as well as cutaneous biopsy and laboratory examination of suspected lesions are strongly recommended.
Sweet's syndrome in acute myelogenous leukemia presenting as periorbital cellulitis with an infiltrate of leukemic cells
Kelli W. Morgan, MD
Jeffrey P. Callen, MD
J Am Acad Dermatol 2001;45:590-5 Abstract quote
Sweet's syndrome is characterized by the abrupt onset of fever, neutrophilic leukocytosis, and erythematous, tender pseudovesiculated plaques or nodules that respond readily to corticosteroid therapy. It is usually distinguished by the presence of mature neutrophils on histopathologic examination.
We describe a 38-year-old man with acute myelogenous leukemia who had an erythematous vesicular eruption of the left eye develop that resembled cellulitis.
A biopsy specimen revealed a dermal infiltrate of mature neutrophils and immature myeloblastic precursors. He later had hemorrhagic pseudovesiculated plaques develop bilaterally on his hands. A biopsy specimen again revealed abundant neutrophils with immature forms. A similar eruption developed at the site of a Hickman catheter placement 4 months later. His skin lesions responded rapidly to oral corticosteroids.
This case is unique in that his initial presentation of Sweet's syndrome resembled orbital cellulitis that was characterized by immature myeloid precursors on histopathology.
HISTOLOGICAL TYPES

BLASTS
Type I myeloblast: fine nuclear chromatin with 2-4 distinct nucleoli; moderate rim of pale to basophilic cytoplasm without azurophilic granules.
Type II myeloblast: similar to the type I blast, with the addition of up to 20 delicate azurophilic granules in the cytoplasm. (The promyelocyte is larger than a type II blast and has numerous azurophilic granules.)
Type III myeloblast: numerous azurophilic granules, but smaller than a promyelocyte.

FRENCH-AMERICAN-BRITISH CLASSIFICATION (FAB)
By definition, acute leukemia is diagnosed with >/= 20% type I and II blasts, but some variants, such as M3, rarely have >/= 20% blasts.

Abbreviations: MPO, myeloperoxidase; SBB, Sudan Black B; NSE, non-specific esterase.
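Two numeric cut-offs recur in the FAB criteria below: the >/= 20% type I/II blast threshold for a diagnosis of acute leukemia, and the 3% MPO/SBB reactivity that separates minimally differentiated disease (M0) from M1. They can be written as a small decision sketch. Everything here, including the function name and return strings, is invented for illustration; actual classification also requires morphology, immunophenotyping, and cytogenetics.

```python
def blast_threshold_screen(blast_pct, mpo_or_sbb_pct):
    """Illustrative sketch of two FAB numeric cut-offs described above.

    blast_pct      -- percentage of type I/II blasts among marrow nucleated cells
    mpo_or_sbb_pct -- percentage of blasts reactive for MPO or Sudan Black B

    This is not a diagnostic tool; it only encodes the two thresholds.
    """
    if blast_pct < 20:
        # Some variants (e.g. M3) rarely reach 20% blasts, so a low count
        # alone does not exclude acute leukemia.
        return "below the conventional 20% blast threshold"
    if mpo_or_sbb_pct < 3:
        return "pattern of minimally differentiated AML (M0), pending immunophenotype"
    return "pattern of AML with >= 3% MPO/SBB-positive blasts (M1 or beyond)"

print(blast_threshold_screen(45, 1))   # M0 pattern
print(blast_threshold_screen(45, 60))  # M1-or-beyond pattern
```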
Acute myeloblastic leukemia, minimally differentiated (M0)
>/= 20% blasts
<3% blasts reactive for MPO, SBB, or NSE
>/= 20% blasts express one or more myeloid antigens: CD13, CD14, CD33
May be TdT positive but blasts negative for lymphocyte antigens
Diagnostic Criteria for Minimally Differentiated Acute Myeloid Leukemia (AML-M0) Evaluation and a Proposal
Zahid Kaleem, MD, and Glenda White, MT(ASCP)
Am J Clin Pathol 2001;115:876-884 Abstract quote
We studied immunophenotypic features of 30 cases of minimally differentiated acute myeloid leukemia (AML-M0) using multiparameter flow cytometry and immunohistochemistry and evaluated the immunophenotypic features of previously reported cases to facilitate correct identification of myeloid lineage. All but 1 of our 30 cases expressed CD13 and/or CD33; 2 expressed CD19; 1 expressed CD10; none expressed both CD10 and CD19. Eleven of 30 cases expressed T-cell–associated antigens. All but 2 cases expressed CD34 and/or HLA-DR. Twelve of 27 cases expressed terminal deoxynucleotidyl transferase. Myeloperoxidase (MPO) expression was seen in 22 of 22 cases by immunohistochemistry and 1 of 4 by flow cytometry. None of 27 cases expressed cyCD3 and cyCD79a.
We propose following modified criteria for AML-M0: (1) standard criteria for acute leukemia; (2) undetectable or less than 3% MPO or Sudan black B staining in blasts; (3) lack of expression of lymphoid-specific antigens, cyCD3 for T lineage and cyCD79 and cyCD22 for B lineage; and (4) positivity for any of the myelomonocytic lineage antigens known not to be expressed on normal T or B lymphocytes or positivity for MPO as detected by ultrastructural cytochemistry, immunohistochemistry, or flow cytometry.
Acute myeloblastic leukemia, without maturation (M1)
>/= 20% blasts
>/= 3% blasts reactive for MPO or SBB
<10% or marrow nucleated cells are promyelocytes or more mature neutrophils
Blasts positive for CD13, CD14, CD33
Acute myeloblastic leukemia, with maturation (M2)
>/= 20% blasts
>/= 3% blasts reactive for MPO or SBB
>/= 10% of marrow nucleated cells are promyelocytes or more mature neutrophils
Blasts myeloid antigen positive
40-80% t(8;21) associated cases are CD19+
20% of t(8;21) associated cases are Tdt+
Acute promyelocytic leukemia (M3)
>/= 20% blasts and abnormal promyelocytes
Intense MPO and SBB positivity
Promyelocytes and blasts with mulitiple Auer rods (faggot cells)
t(15;17) chromosomal abnormality
Promyelocytes are HLA-DR negative in most cases
Microgranular (hypogranular) M3

Morphologic, Cytogenetic, and Molecular Abnormalities in Therapy-Related Acute Promyelocytic Leukemia
C. Cameron Yin, MD, PhD, Armand B. Glassman, MD, Pei Lin, MD, Jose R. Valbuena, MD, Dan Jones, MD, PhD, Rajyalakshmi Luthra, PhD, and L. Jeffrey Medeiros, MD
Am J Clin Pathol 2005;123:840-848 Abstract quote
We describe 17 cases of therapy-related acute promyelocytic leukemia (tAPL). Treatment for the initial neoplasms (mostly carcinomas and non-Hodgkin lymphomas) included radiation and chemotherapy in 11 patients, radiation in 3, and chemotherapy in 3. The interval between the initial neoplasm and tAPL ranged from 17 to 116 months (median, 40 months).
Morphologically, all 13 cases with available bone marrow aspirate smears showed tAPL. Dyserythropoiesis or dysmegakaryopoiesis was identified in 11 cases. In 2 cases, too few nonneoplastic cells and, in all cases, too few maturing granulocytes were present to assess for dysplasia. Conventional cytogenetics or fluorescence in situ hybridization (FISH) showed the t(15;17)(q22;q21) in all cases; 6 as a sole abnormality, 9 with additional abnormalities, and 2 assessed only by FISH. Reverse transcription–polymerase chain reaction (PCR) studies showed PML/RARa in 13 cases (8 short form, 5 long form). Mutations of the flt3 gene were detected by PCR in 5 (42%) of 12 cases.
We conclude that dysplastic features, secondary cytogenetic abnormalities, and flt3 mutations are common in tAPL.
Acute myelomonocytic leukemia (M4)
>/= 20% myeloblasts, monoblasts, and promonocytes
<80% monocytic cells in marrow
>/= 5x10*9/L monocytic cells in blood
>/= 20% neutrophils and precursors in marrow
Monocytic cells reactive for NSE
Abnormal eosinophils in M4 with associated inv(16) chromosome abnormality
Varying proportions of blasts and monocytic cells positive for CD13, CD14, CD15, CD33
Monocytes positive for CD36
Acute myelomonocytic leukemia with increased marrow eosinophils (M4Eo)
Dysplasia and High Proliferation Rate Are Common in Acute Myeloid Leukemia With inv(16)(p13q22)
Xiaoping Sun, MD, PhD, L. Jeffrey Medeiros, MD, Di Lu, MD, PhD, George Z. Rassidakis, MD, and Carlos Bueso-Ramos, MD, PhD
Am J Clin Pathol 2003;120:236-245 Abstract quote
Acute myeloid leukemia (AML) with inv(16)(p13q22), also known as M4Eo, is a distinct type of AML with a favorable prognosis associated with abnormal bone marrow eosinophils.
We reviewed the morphologic findings of archival bone marrow specimens with M4Eo, specifically assessing for dysplasia, and performed immunohistochemical studies to assess the growth fraction using the MIB-1 (Ki-67) antibody. We also assessed the apoptotic rate by terminal deoxynucleotidyl transferase–mediated deoxyuridine triphosphate–nick end labeling. All assessable cases had more than 10% dysplastic forms in at least 1 lineage. Seventeen cases had 10% or more dysplastic forms, and 3 cases had more than 50% dysplastic forms in at least 2 lineages. Immunoreactivity for Ki-67 was higher in M4Eo than in other AML types (P = .000). The apoptotic rate in M4Eo was similar to other AML types (P = .724).
Our data show that dysplasia is a prominent feature, but not a prognostic indicator, in M4Eo. M4Eo is associated with a significantly higher proliferation rate than other AML types.
Acute monoblastic leukemia, poorly differentiated (M5A)
>/= 80% monocytic cells
Monoblasts >/=80% of monocytic cells
Monoblasts and promonocytes NSE positive although 10-20% cases are negative
Monoblasts usually MPO and SBB negative
Varying proportions of CD13, CD14, CD15, and CD33 positive
Monoblasts CD36 and usually CD4+
Acute monoblastic leukemia, differentiated (M5B)
>/=80 monocytic cells
Monoblasts<80% of monocytic cells
Monoblasts and promonocytes NSE positive
Promonocytes scattered MPO and SBB positive granules
Varying proportions of CD13, CD14, CD15, and CD33 positive
Monoblasts and promonocytes CD36+
Acute erythroleukemia (M6)
>/= 50% erythroid precursors
>/= 20% of nonerythroid precursors are myeloblasts
Auer rods may be present in myeloblasts
Erythroid precursors frequently PAS positive
Myeloid antigens in myeloblasts
Erythroid precursors express glycophorin A, hemoglobin A, and express CD36
Acute megakaryoblastic leukemia (M7)
>/= 30% blasts
>/= 50% megakaryocytic cells by morphology, immunophenotype studies, or electron microscopy
Megakaryocytes cells express myeloid antigen CD33
Platelet glycoproteins (CD41 and CD61)
ADDITIONAL VARIANTS

3(q21;q26)

t(6;9)(p23;q34)

Acute Myeloid Leukemia With t(6;9)(p23;q34) Is Associated With Dysplasia and a High Frequency of flt3 Gene Mutations
Mauricio P. Oyarzo, MD, Pei Lin, MD, Armand Glassman, MD, Carlos E. Bueso-Ramos, MD, PhD, Rajyalakshmi Luthra, PhD, and L. Jeffrey Medeiros, MD
Am J Clin Pathol 2004;122:348-358 Abstract quote
We report 12 cases of t(6;9)(p23;q34)-positive acute myeloid leukemia (AML), all classified using the criteria of the World Health Organization classification.
There were 10 women and 2 men with a median age of 51 years (range, 20-76 years). Dysplasia was present in all cases (9 previously untreated), and basophilia was present in 6 (50%). Immunophenotypic studies showed that the blasts were positive for CD9, CD13, CD33, CD38, CD117, and HLA-DR in all cases assessed. CD34 was positive in 11 (92%) of 12, and terminal deoxynucleotidyl transferase was positive in 7 (64%) of 11 cases. The t(6;9) was the only cytogenetic abnormality detected in 7 cases (58%), and 5 cases had additional chromosomal abnormalities. Of 8 cases assessed, 7 (88%) had flt3 gene mutations.
We conclude that t(6;9)-positive AML cases have distinctive morphologic features, an immunophenotype suggesting origin from an early hematopoietic progenitor cell, and a high frequency of flt3 gene mutations.
Correlation between karyotype and quantitative immunophenotype in acute myelogenous leukemia with t(8;21).
Khoury H, Dalal BI, Nantel SH, Horsman DE, Lavoie JC, Shepherd JD, Hogge DE, Toze CL, Song KW, Forrest DL, Sutherland HJ, Nevill TJ.
1Department of Cellular and Molecular Biology, Princess Margaret Hospital, Toronto, Ontario, Canada.
Mod Pathol. 2004 Oct;17(10):1211-6 Abstract quote.
Acute myelogenous leukemia with t(8;21) is a distinct clinicopathologic entity in which the malignant myeloblasts display a characteristic pattern of surface antigen expression. Quantitative analysis of surface marker expression in patients with this chromosomal abnormality compared to acute myelogenous leukemia patients with a different karyotype has not been reported.
From 305 consecutive newly diagnosed acute myelogenous leukemia patients underwent immunophenotyping and cytogenetic analysis at our center; 16 patients (5.2%) had a t(8;21). Fluorescence intensity values were obtained, using a set of reference microbeads, by conversion of mean channel fluorescence to molecular equivalent of soluble fluorochrome. Patients with t(8;21) displayed higher levels of CD34, HLA-DR and MPO expression (P<0.001 for each) and lower levels of CD13 (P=0.03) and CD33 (P=0.02) expression. In order to study the sensitivity, specificity and predictive value of these markers, molecular equivalent of soluble fluorochrome thresholds were statistically determined. The statistically established threshold for each of the individual markers (CD34>60.5 x 10(3), HLA-DR>176.1 x 10(3), MPO>735.1 x 10(3), CD13<24.3 x 10(3) and CD33<17.3 x 10(3)) had a sensitivity of 100%, a specificity of 62-92% and a positive predictive value of 7-45%.
In multivariate analysis, two quantitative patterns (CD34>60.5 x 10(3) and MPO>176.1 x 10(3); CD33<17.3 x 10(3) and MPO>176.1 x 10(3)) had a sensitivity, specificity and positive predictive value of 100%. These aberrant phenotypic patterns might help identify patients with t(8;21) at diagnosis and could be useful in minimal residual disease monitoring.
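The two multivariate marker patterns quoted in the abstract above (CD34 > 60.5 x 10^3 with MPO > 176.1 x 10^3, or CD33 < 17.3 x 10^3 with MPO > 176.1 x 10^3, in molecular-equivalent-of-soluble-fluorochrome units) amount to a simple predicate. The sketch below hard-codes those published thresholds; the function name, the dict-based input, and the sample values are assumptions of this example.

```python
def matches_t8_21_pattern(mesf):
    """Check the two quantitative immunophenotype patterns reported for
    AML with t(8;21) (Khoury et al., Mod Pathol 2004).

    `mesf` maps marker names to MESF values in thousands (x10^3 omitted).
    This encodes the abstract's thresholds only; it is not a diagnostic rule.
    """
    pattern1 = mesf["CD34"] > 60.5 and mesf["MPO"] > 176.1
    pattern2 = mesf["CD33"] < 17.3 and mesf["MPO"] > 176.1
    return pattern1 or pattern2

# Hypothetical MESF values (in thousands):
print(matches_t8_21_pattern({"CD34": 80.0, "CD33": 25.0, "MPO": 800.0}))  # True
print(matches_t8_21_pattern({"CD34": 40.0, "CD33": 20.0, "MPO": 150.0}))  # False
```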
Acute basophilic leukemia
>/= 20% blasts
Evidence of differentiation to basophils by light microscopy or electron microscopy
Usually positive for metachromatic stains

HYPOCELLULAR AML

Acute myeloid leukemia and transient myeloproliferative disorder of Down syndrome

ACUTE MYELOFIBROSIS

BIPHASIC
Biphasic acute myeloid leukemia with near-tetraploidy and immunophenotypic transformation.
Imkie M, Davis MK, Persons DL, Cunningham MT.
Department of Pathology, University of Kansas Medical Center, Kansas City, Kan 66223, USA.
Arch Pathol Lab Med. 2004 Apr;128(4):448-51. Abstract quote
This report describes a case of acute myeloid leukemia (subtype M1) with biphasic morphology. The bone marrow biopsy showed 2 distinct regions of blasts, one containing large cells and the other small cells. Morphometric and DNA ploidy analysis showed that the mean nuclear area and mean DNA index for the large cell region were 2-fold higher than those for the small cell region. Cytogenetic analysis showed an abnormal near-tetraploid clone. The tumor relapsed following aggressive therapy. The cells from the relapse specimen were similar to the original small cell region with respect to nuclear area and DNA index; however, there was immunophenotypic transformation with gain of CD7 and gain of CD56.
Cytogenetically, the relapse specimen showed no evidence of the near-tetraploid clone, but instead had a previously unidentified abnormal clone containing 46 chromosomes and structural abnormalities of 2q and 7q. Biphasic morphology in acute myeloid leukemia may be predictive of a near-tetraploid subclone and immunophenotypic transformation.
GRANULOCYTIC SARCOMA (LEUKEMIA CUTIS)
Concurrent Chronic Lymphocytic Leukemia Cutis and Acute Myelogenous Leukemia Cutis in a Patient with Untreated CLL
Michael K. Miller, M.D.; James A. Strauchen, M.D.; Khanh T. Nichols, M.D.; Robert G. Phelps, M.D.
From the Departments of Dermatopathology, Dermatology, and Pathology, Mount Sinai School of Medicine, New York, New York.
Am J Dermatopathology 2001;23:334-340 Abstract quote
Patients who have chronic lymphocytic leukemia (CLL) are known to have a high frequency of second malignant neoplasms. However, acute myelogenous leukemia (AML) occurring concurrent with or after a diagnosis of CLL is extremely rare.
In this article we report a case of AML developing in a 55-year-old male with a 6-year history of untreated CLL. The diagnosis was facilitated by touch preparation of a skin punch biopsy specimen. The patient presented with a two-week history of fever, weakness, anasarca, and a skin rash. Physical examination revealed pink to skin-colored firm papules, which coalesced into indurated plaques on his trunk, upper extremities, and face. The lesions, in combination with generalized edema, produced a leonine facies.
Touch prep of the biopsy showed medium to large blasts, large monocytoid cells, and numerous small mature lymphocytes, providing the preliminary diagnosis of a second, previously undiagnosed myelomonocytic malignancy in this patient.
The initial diagnosis was subsequently confirmed by histologic, cytochemical, immunohistochemical and flow cytometry studies. This is the first reported case of CLL with concurrent AML in which rapid touch prep of a skin punch biopsy facilitated diagnosis.
PSEUDO-CHEDIAK-HIGASHI ANOMALY
Acute Myeloid Leukemia With Pseudo–Chédiak-Higashi Anomaly Exhibits a Specific Immunophenotype With CD2 Expression
Hong Chang, PhD, MD, FRCPC, and Qi Long Yi, PhD
Am J Clin Pathol 2006;125:791-794 Abstract quote
Acute myeloid leukemia (AML) with pseudo–Chédiak-Higashi (PCH) anomaly is a rare morphologic entity.
We characterized 5 cases by multiparameter flow cytometry and found that in all cases, the blasts aberrantly expressed CD2, a pan–T cell–associated marker, in addition to their myeloid-associated markers.
In contrast, CD2 was expressed in only 25 (17.9%) of 140 cases of newly diagnosed AML without PCH anomaly. CD2 expression correlated strongly with AML with PCH anomaly (P < .01), suggesting a link between a specific immunophenotypic marker, CD2, and AML with PCH anomaly.
CHARACTERIZATION
Sudan Black B (SBB): Positive in cells of neutrophil, eosinophil, and monocyte lineage
Nonspecific esterase (NSE): Positive in monocytes
Myeloperoxidase (MPO): Positive in cells of neutrophil, eosinophil, and monocyte lineage
Evaluation of Bone Marrow Specimens With Acute Myelogenous Leukemia for CD34, CD15, CD117, and Myeloperoxidase Comparison of Flow Cytometric and Enzyme Cytochemical Versus Immunohistochemical Techniques
Cherie H. Dunphy, MD, Jacek M. Polski, MD, H. Lance Evans, MD, and Laura J. Gardner, MD
From the Division of Hematopathology, Department of Pathology, St Louis University Health Sciences Center, St Louis, Mo (Drs Dunphy, Evans, and Gardner); and the Department of Pathology, University of South Alabama, Mobile, Ala (Dr Polski).
Arch Pathol Lab Med 2001;125:1063–1069. Abstract quote
Context. —Immunophenotyping of bone marrow (BM) specimens with acute myelogenous leukemia (AML) may be performed by flow cytometric (FC) or immunohistochemical (IH) techniques. Some markers (CD34, CD15, and CD117) are available for both techniques. Myeloperoxidase (MPO) analysis may be performed by enzyme cytochemical (EC) or IH techniques.
Objective. —To determine the reliability of these markers and MPO by these techniques, we designed a study to compare the results of analyses of these markers and MPO by FC (CD34, CD15, and CD117), EC (MPO), and IH (CD34, CD15, CD117, and MPO) techniques.
Materials and Methods. —Twenty-nine AMLs formed the basis of the study. These AMLs all had been immunophenotyped previously by FC analysis; 27 also had had EC analysis performed. Of the AMLs, 29 had BM core biopsies and 26 had BM clots that could be evaluated. The paraffin blocks of the 29 BM core biopsies and 26 BM clots were stained for CD34, CD117, MPO, and CD15. These results were compared with results by FC analysis (CD34, CD15, and CD117) and EC analysis (MPO).
Results. —Immunodetection of CD34 expression in AML had a similar sensitivity by FC and IH techniques. Immunodetection of CD15 and CD117 had a higher sensitivity by FC analysis than by IH analysis. Detection of MPO by IH analysis was more sensitive than by EC analysis. There was no correlation of French-American-British (FAB) subtype of AML with CD34 or CD117 expression. Expression of CD15 was associated with AMLs with a monocytic component. Myeloperoxidase reactivity by IH analysis was observed in AMLs originally FAB subtyped as M0.
Conclusions. —CD34 can be equally detected by FC and IH techniques. CD15 and CD117 are better detected by FC analysis and MPO is better detected by IH analysis.
- Immunoreactivity of MIC2 (CD99) and terminal deoxynucleotidyl transferase in bone marrow clot and core specimens of acute myeloid leukemias and myelodysplastic syndromes.
Kang LC, Dunphy CH.
Division of Hematopathology, Department of Pathology and Laboratory Medicine, University of North Carolina, Chapel Hill, NC 27599-7525, USA.
Arch Pathol Lab Med. 2006 Feb;130(2):153-7. Abstract quote
CONTEXT: MIC2 ("thymus leukemia") antigen has been shown to be expressed by T cells and monocytes, as well as B cells and granulocyte-lineage cells. It is most intensely expressed by the most immature thymus T-lineage cells and is more intensely expressed by CD34-positive/CD33-positive myeloid cells (compared to more mature myeloid cells) and the earliest CD34-positive/CD10-positive B-cell precursor cells (compared to cells of later B-cell precursor stages). CD99 (MIC2) is characteristically expressed in precursor B- and T-cell lymphoblastic lymphomas/leukemias, as well as in Ewing sarcoma/primitive neuroectodermal tumors (ES/PNET). It has also been shown to be expressed in a few terminal deoxynucleotidyl transferase (TdT)-positive myeloid processes, but has been uniformly negative in TdT-negative myeloid processes. A more recent study showed that 43% of acute myeloid leukemias (AMLs) and 55% of chloromas express CD99, concluding that CD99 is commonly expressed in AML and rarely seen in myeloproliferative disorders, myelodysplastic syndromes, or normal bone marrow. Although this study speculated that MIC2 expression was probably not limited to TdT-positive AML, there was no comparison with TdT reactivity in this study.
OBJECTIVE: Since AML and high-grade myelodysplastic syndrome may occasionally be difficult to distinguish morphologically from acute lymphoblastic leukemia and ES/PNET, we undertook a study to analyze MIC2 expression in conjunction with TdT reactivity in distinguishing AML or high-grade myelodysplastic syndrome from acute lymphoblastic leukemia and ES/PNET.
DESIGN: We studied bone marrow core and clot paraffin specimens from AML (classified according to criteria of the World Health Organization; n = 49), myelodysplastic syndromes (n = 4), precursor B-cell acute lymphoblastic leukemia (n = 4), ES/PNET (n = 1), and normal bone marrow (n = 3) with MIC2 (CD99) and TdT immunohistochemistry.
RESULTS: Overall, CD99 was expressed in 24 (49%) of 49 AML cases, including all (11/11) TdT-positive cases. CD99 was expressed in all subtypes of AML except M5. Myelodysplastic syndromes and normal bone marrow specimens were uniformly CD99 negative. Expression of TdT was limited to a subset of AML-M0, -M1, -M2, and -M4, and AML with multilineage dysplasia.
CONCLUSIONS: In contrast to a previous study, CD99 expression was not restricted to TdT-positive hematologic proliferations. In particular, the CD99-positive M3 and M7 AMLs were TdT negative. An M5 AML may likely be excluded based on a uniform TdT-negative/CD99-negative immunophenotype. In addition, in our experience, CD99 should be routinely evaluated on bone marrow clots, owing to decreased reactivity or loss of reactivity in rapid decalcifying (RDO) solution-decalcified specimens.
Usefulness of Anti-CD117 in the Flow Cytometric Analysis of Acute Leukemia
Christine P. Hans, MD, William G. Finn, MD, Timothy P. Singleton, MD,* Bertram Schnitzer, MD, and Charles W. Ross, MD
Am J Clin Pathol 2002;117:301-305 Abstract quote
We assessed the diagnostic usefulness of adding anti-CD117 to our existing flow cytometric profile in the analysis of 150 consecutive cases of acute leukemia (de novo or relapsed acute myelogenous leukemia [AML], AML arising in myelodysplastic syndrome, blast crisis of chronic myelogenous leukemia [CML], acute lymphoblastic leukemia, acute unclassifiable leukemia, and biphenotypic leukemia).
CD117 was expressed on more than 10% of blasts in 64% of de novo AMLs (42/66), 95% of relapsed AMLs (19/20), 75% of AMLs arising from a myelodysplastic syndrome (6/8), and 25% of myeloid blast crisis in CMLs (1/4). CD117 was not expressed in acute lymphoblastic, acute biphenotypic, or unclassified leukemia or lymphoid blast crisis of CML. The specificity, positive predictive value, sensitivity, and negative predictive value of CD117 for AML were 100%, 100%, 69%, and 62%, respectively. CD117 is a specific marker for myeloblastic leukemias. Sensitivity is greatest in French-American-British M2 and relapsed AML. Intensity of CD117 expression is dim.
Despite the high specificity and positive predictive value, the addition of anti-CD117 to our panel did not prove essential for the assignment of blast lineage.
PODOCALYXIN
Podocalyxin: A Marker of Blasts in Acute Leukemia
Todd W. Kelley, MD, et al.
Am J Clin Pathol 2005;124:134-142 Abstract quote
Podocalyxin is a CD34 family member expressed by podocytes, vascular endothelium, mesothelium, and a subset of hematopoietic progenitors. Podocalyxin expression was not observed in the hematopoietic cells of normal adult bone marrow samples. However, podocalyxin was expressed by blasts in 30 (77%) of 39 cases of acute myeloid leukemia (AML), 22 (81%) of 27 cases of acute lymphoblastic leukemia (ALL), and 13 (87%) of 15 cases of cutaneous myeloid sarcoma.
No correlation with CD34 expression by immunohistochemical analysis was seen. Wilms tumor 1 (WT1) expression was detected in blasts in 17 AML cases (44%) and 21 ALL cases (78%). There was no correlation between WT1 and podocalyxin expression. We conclude that podocalyxin is expressed commonly by blasts in ALL and AML. Analysis of the expression of CD34 and podocalyxin increases sensitivity for the immunophenotypic detection of leukemic blasts compared with the analysis of CD34 alone.
Therefore, podocalyxin seems to complement CD34 as a useful hematopoietic blast marker. The physiologic role of podocalyxin in leukemic blasts remains unknown.
VEGF
Immunohistochemical Detection of VEGF in the Bone Marrow of Patients With Acute Myeloid Leukemia
Correlation Between VEGF Expression and the FAB Category
Minoo Ghannadan, PhD,1 Friedrich Wimazal, MD,1 Ingrid Simonitsch, MD,2 Wolfgang R. Sperr, MD,1 Matthias Mayerhofer, MD,1 Christian Sillaber, MD,1 Alexander W. Hauswirth, MD,1 Helmut Gadner, MD,3 Andreas Chott, MD,2 Hans-Peter Horny, MD,4 Klaus Lechner, MD,1 and Peter Valent, MD
Am J Clin Pathol 2003;119:663-671 Abstract quote
We studied vascular endothelial growth factor (VEGF) expression in bone marrow sections obtained from 3 healthy donors and 41 patients with acute myeloid leukemia (AML) of various French-American-British (FAB) subtypes by immunohistochemical analysis using an anti-VEGF antibody. In normal bone marrow, the anti-VEGF antibody reacted with myeloid progenitor cells and megakaryocytes but not with erythroid cells or mature granulocytic cells.
High levels of VEGF were found in the bone marrow in patients with AML-M1, -M2, -M3, -M4, -M4Eo, and -M5. In these leukemias, the vast majority of myeloblasts (>90%) expressed VEGF. By contrast, in AML-M0, the percentage of VEGF-positive blasts was lower in most cases (median, 42%), and if at all detectable, these blast cells contained only trace amounts of VEGF. In AML-M3 and -M4Eo, maturing granulocytes failed to express VEGF similar to granulocytes in normal bone marrow.
In AML-M6, myeloblasts exhibited VEGF, whereas erythroid cells did not. In AML-M7, blast cells and megakaryocytes were identified as major sources of VEGF. In summary, VEGF expression in the bone marrow is restricted to certain stages of differentiation and maturation of myeloid cells and correlates with the FAB category.
DIFFERENTIAL DIAGNOSIS
CHARACTERIZATION
ACUTE PANMYELOSIS WITH MYELOFIBROSIS
- Acute panmyelosis with myelofibrosis: an entity distinct from acute megakaryoblastic leukemia.
Orazi A, O'malley DP, Jiang J, Vance GH, Thomas J, Czader M, Fang W, An C, Banks PM.
1Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, Indiana, USA.
Mod Pathol. 2005;18:603-614 Abstract quote
The WHO criteria for diagnosing acute panmyelosis with myelofibrosis are somewhat distinct from those for acute megakaryoblastic leukemia. However, clinical and hematopathologic findings partially overlap. This has raised questions as to whether these are indeed separate, definable entities.
To determine the potential importance of bone marrow biopsy supplemented by immunohistochemistry in distinguishing between these two conditions, we studied 17 bone marrow biopsies of well-characterized cases of acute panmyelosis with myelofibrosis (six cases) and acute megakaryoblastic leukemia (11 cases). We compared blast frequency, reticulin content, CD34 expression, and the degree of megakaryocytic differentiation of the blast cells in these two conditions. Our results demonstrate important differences.
Acute panmyelosis with myelofibrosis is characterized by a multilineage myeloid proliferation with a less numerous population of blasts than acute megakaryoblastic leukemia (P<0.01). In the former condition, blasts are always positive with CD34, while in acute megakaryoblastic leukemia they express CD34 in 60% of the cases. The blasts in acute panmyelosis with myelofibrosis only rarely express megakaryocytic antigens. By contrast, acute megakaryoblastic leukemia has a significantly higher proportion of blasts expressing megakaryocytic antigens (P<0.01 with CD42b).
Our results confirm that histology supplemented by immunohistochemistry permits the distinction of these conditions in routinely processed bone marrow biopsies.
ACUTE PROMYELOCYTIC LEUKEMIA (M3)
Leukemias resembling acute promyelocytic leukemia, microgranular variant.
Nagendra S, Meyerson H, Skallerud G, Rosenthal N.
Department of Pathology, University of Iowa College of Medicine, Iowa City, USA.
Am J Clin Pathol 2002 Apr;117(4):651-7 Abstract quote
Acute promyelocytic leukemia (APL) should be distinguished from other subtypes of acute myeloid leukemia (AML) because of the increased risk of disseminated intravascular coagulation (DIC) and its response to arsenic compounds and retinoids. Some cases of AML seem morphologically similar to the microgranular variant of APL (French-American-British [FAB] AML-M3v) but lack the t(15;17).
We evaluated 8 cases of APL-like leukemias for subtle morphologic, cytochemical, immunophenotypic, and cytogenetic differences compared with 5 cases of promyelocytic leukemia/retinoic receptor alpha (PML/RARalpha)-positive APL (FAB AML-M3v). We also evaluated both groups for the presence of DIC. No differences among the groups were noted in blast size, chromatin pattern, nuclear morphologic features, intensity of myeloperoxidase staining, or presence of Auer rods. Immunophenotypes were similar; both types of cases lacked CD34 and HLA-DR and were CD13+ and CD33+. Two cases of APL-like leukemias also were CD56+. DIC was present in 2 patients with M3v.
Our study shows that there are no definitive morphologic, cytochemical, or immunophenotypic findings that can distinguish these cases from PML/RARalpha-positive APL.
Expression of CD117 and CD11b in Bone Marrow Can Differentiate Acute Promyelocytic Leukemia From Recovering Benign Myeloid Proliferation
Edgar G. Rizzatti, MD, Aglair B. Garcia, MT, Fernando L. Portieres, MD, Dirceu E. Silva, MD, Sérgio L.R. Martins, MD, PhD, and Roberto P. Falcão, MD, PhD
Am J Clin Pathol 2002;118:31-37 Abstract quote
The morphologic characteristics of bone marrow aspirates from patients recovering from acute agranulocytosis may be closely similar to the pattern observed in cases of acute promyelocytic leukemia (APL).
The clinical manifestation also can be ambiguous in a substantial number of cases. The immunophenotypic features of bone marrow from 5 patients recovering from acute agranulocytosis, showing an increase in the percentage of promyelocytes (26%-66%), were compared with the immunophenotype of 31 consecutive patients with APL whose diagnosis was confirmed by PML-RAR alpha gene rearrangement. All markers were similarly expressed, except for CD117 and CD11b. CD117 was positive in 24 (77%) of the APL cases and in none of the acute agranulocytosis cases. On the other hand, CD11b was positive in 5 (100%) of the acute agranulocytosis cases and in only 2 (6%) of the APL cases.
Thus, the CD117–CD11b+ phenotype was detected in all patients recovering from agranulocytosis and in only 1 (3%) of 31 APL cases. Therefore, we suggest that the combination of both markers is helpful in the differentiation of APL from recovering benign myeloid proliferation.
Atypical blasts and bone marrow necrosis associated with near-triploid relapse of acute promyelocytic leukemia after arsenic trioxide treatment.
Chim CS, Lam CC, Wong KF, Man C, Kam S, Kwong YL.
Department of Medicine and Pathology, Queen Mary Hospital and Department of Pathology, Queen Elizabeth Hospital, Hong Kong.
Hum Pathol 2002 Aug;33(8):849-51 Abstract quote
The pathologic features of acute promyelocytic leukemia (APL) with t(15;17)(q22;q21) are highly characteristic, which with few exceptions enable a firm diagnosis to be made on morphologic grounds.
An APL patient in first relapse presented with large, bizarre circulating blasts and bone marrow necrosis 2 weeks after chemotherapy consolidation for an arsenic trioxide-induced remission. Although a morphologic diagnosis could not be reached, cytogenetic investigations showed a near-triploid clone with t(15;17), confirming APL in second relapse.
This case showed that clonal evolution with additional karyotypic aberrations might alter the blast morphology and pathologic features in APL.
THROMBOPOIETIN
Thrombopoietin Administered During Induction Chemotherapy to Patients With Acute Myeloid Leukemia Induces Transient Morphologic Changes That May Resemble Chronic Myeloproliferative Disorders
Vonda K. Douglas, MD,1 Martin S. Tallman, MD,3 Larry D. Cripe, MD,4 and LoAnn C. Peterson, MD
Am J Clin Pathol 2002;117:844-850 Abstract quote
Thrombopoietin (TPO), a potent stimulator of megakaryocyte and platelet production, has been used in clinical trials to reduce thrombocytopenia after chemotherapy in patients with acute myeloid leukemia (AML).
We report that TPO therapy is associated with peripheral blood and bone marrow findings that can mimic myeloproliferative disorders. Peripheral blood and bone marrow samples of 13 patients with AML who received TPO were examined. A subset of bone marrow samples exhibited hypercellularity, megakaryocytic hyperplasia, and reticulin fibrosis after TPO administration. Cases demonstrated as many as 58.4 megakaryocytes per high-power field (MHPF) compared with 3.7 MHPF in the control group. Megakaryocytic atypia, increased mitoses, emperipolesis, intrasinusoidal megakaryocytes, and thickened trabeculae also were seen. Peripheral blood findings included leukoerythroblastosis, leukocytosis, thrombocytosis, and circulating megakaryocyte nuclei. Changes resolved within 3 months after discontinuation of TPO.
This rapid resolution of the morphologic abnormalities induced by TPO distinguishes these findings from those seen in true chronic myeloproliferative disorders.
PROGNOSIS AND TREATMENT
CHARACTERIZATION
PROGNOSIS
Poor prognostic factors:
Value of combined morphologic, cytochemical, and immunophenotypic features in predicting recurrent cytogenetic abnormalities in acute myeloid leukemia.
Arber DA, Carter NH, Ikle D, Slovak ML.
Hum Pathol. 2003 May;34(5):479-83. Abstract quote
To evaluate the reliability of previously described morphologic, cytochemical and immunophenotypic criteria for the identification of acute myeloid leukemias (AMLs) with t(8;21), inv(16)/t(16;16) and t(15;17), 300 cases were reviewed retrospectively. Eighteen AMLs with features of t(8;21), 31 with features of inv(16)/t(16;16), and 22 with features of t(15;17) were identified.
Cytogenetic studies were available for 228 cases and identified 15 cases of t(8;21), 30 cases of inv(16)/t(16;16), 18 cases of t(15;17) and 11 cases 11q23 AML. The true positive rate for pre-cytogenetic evaluation was 95% for t(15;17), 88% for inv(16)/t(16;16) and 87.5% for t(8;21). No difference in 5-year survival was identified in the precytogenetic and corresponding cytogenetic disease groups. No specific features to predict 11q23 abnormalities were identified.
This study confirms the reliability of a combined morphologic, cytochemical and immunophenotypic approach to the initial classification of AML. Cytogenetic studies are still needed on all cases to identify the small proportion of cases that will be missed by these methods and to identify other significant cytogenetic abnormalities in AML.
BODY MASS INDEX
- Mortality in overweight and underweight children with acute myeloid leukemia.
Lange BJ, Gerbing RB, Feusner J, Skolnik J, Sacks N, Smith FO, Alonzo TA.
Division of Oncology, The Children's Hospital of Philadelphia, Philadelphia, Pa 19104, USA.
JAMA. 2005 Jan 12;293(2):203-11 Abstract quote.
CONTEXT: Current treatment for acute myeloid leukemia (AML) in children cures about half the patients. Of the other half, most succumb to leukemia, but 5% to 15% die of treatment-related complications. Overweight children with AML seem to experience excess life-threatening and fatal toxicity. Nothing is known about how weight affects outcomes in pediatric AML.
OBJECTIVE: To compare survival rates in children with AML who at diagnosis are underweight (body mass index [BMI] ≤10th percentile), overweight (BMI ≥95th percentile), or middleweight (BMI = 11th-94th percentiles).
DESIGN, SETTING, AND PARTICIPANTS: Retrospective review of BMI and survival in 768 children and young adults aged 1 to 20 years enrolled in Children's Cancer Group-2961, an international cooperative group phase 3 trial for previously untreated AML conducted August 30, 1996, through December 4, 2002. Data were collected through January 9, 2004, with a median follow-up of 31 months (range, 0-78 months).
MAIN OUTCOME MEASURES: Hazard ratios (HRs) for survival and treatment-related mortality.
RESULTS: Eighty-four of 768 patients (10.9%) were underweight and 114 (14.8%) were overweight. After adjustment for potentially confounding variables of age, race, leukocyte count, cytogenetics, and bone marrow transplantation, compared with middleweight patients, underweight patients were less likely to survive (HR, 1.85; 95% confidence interval [CI], 1.19-2.87; P = .006) and more likely to experience treatment-related mortality (HR, 2.66; 95% CI, 1.38-5.11; P = .003). Similarly, overweight patients were less likely to survive (HR, 1.88; 95% CI, 1.25-2.83; P = .002) and more likely to have treatment-related mortality (HR, 3.49; 95% CI, 1.99-6.10; P<.001) than middleweight patients. Infections incurred during the first 2 courses of chemotherapy caused most treatment-related deaths.
CONCLUSION: Treatment-related complications significantly reduce survival in overweight and underweight children with AML.
CD2
Expression of CD2 in Acute Promyelocytic Leukemia Correlates With Short Form of PML-RARα Transcripts and Poorer Prognosis
Pei Lin, MD, Suyang Hao, MD, L. Jeffrey Medeiros, MD, Elihu H. Estey, MD, Sherry A. Pierce, Xuemei Wang, MS, Armand B. Glassman, MD, Carlos Bueso-Ramos, MD, PhD, and Yang O. Huh, MD
Am J Clin Pathol 2004;121:402-407 Abstract quote
We studied the immunophenotype of 100 cases of acute promyelocytic leukemia (APL) with cytogenetic evidence of t(15;17)(q22;q21), 72 hypergranular (M3) and 28 microgranular (M3v), and correlated the results with molecular and clinical features. Most neoplasms (75/100 [75%]) had a typical immunophenotype: CD13+CD33+CD34–HLA-DR–. CD64, CD2, CD34, and HLA-DR were expressed in 27% (24/88), 23% (22/94), 21% (21/100), and 9% (9/98), respectively. CD34 expression was restricted to M3v; HLA-DR and CD2 were expressed more often in M3v than in M3 (P < .001). PML-RARα fusion transcripts were detected by reverse transcriptase–polymerase chain reaction in all 70 patients assessed.
The short form of PML-RARα transcripts was found more frequently in M3v (P < .002) and CD2+ APL (P < .0001) than in M3 and CD2– APL, respectively. The median follow-up was 128 weeks. CD2+ APL was associated significantly with leukocytosis (P = .004), shorter complete remission duration (P = .03), and a trend toward shorter overall survival (P = .07) than CD2– APL.
Overall survival for M3v vs M3 (P = .68) and short vs long transcripts (P = .21) was not significantly different. Immunophenotyping is useful for predicting the biologic and clinical behavior of APL.
CHROMOSOMAL CHANGES
Prognostic implications:
inv(16) or t(16;16): Favorable
+8: Intermediate
-7 or del(7q), complex defects: Unfavorable
Normal chromosomes, two to three miscellaneous defects: Undetermined

Prognostic Impact of Acute Myeloid Leukemia Classification
Importance of Detection of Recurring Cytogenetic Abnormalities and Multilineage Dysplasia on Survival
Daniel A. Arber, MD
Anthony S. Stein, MD
Nora H. Carter, MS
Stephen J. Forman, MD
Marilyn L. Slovak, PhD
Am J Clin Pathol 2003; 119:672-680 Abstract quote
To evaluate the prognostic impact of acute myeloid leukemia (AML) classifications, specimens from 300 patients with 20% or more bone marrow myeloblast cells were studied. Specimens were classified according to the French-American-British Cooperative Group (FAB), the World Health Organization (WHO), the Realistic Pathologic Classification, and a cytogenetic risk group scheme.
Cases with fewer than 30% blast cells did not have a 5-year survival significantly different from cases with 30% or more blast cells, and survival was similar for the low blast cell count group and cases with multilineage dysplasia and 30% or more blasts.
Categories of AML with recurrent cytogenetic abnormalities of t(15;17), t(8;21), inv(16)/t(16;16), and 11q23 showed significant differences in 5-year survival. No significant difference was identified between AMLs arising from myelodysplasia and de novo AMLs with multilineage dysplasia, but all cases with multilineage dysplasia had a worse survival than all other AMLs and other AMLs without favorable cytogenetics. FAB types M0, M3, and M4Eo showed differences in survival compared with all other FAB types, with M0 showing a significant association with high-risk cytogenetics and 11q23 abnormalities. Other FAB groups and WHO AML, not otherwise categorized subgroups did not show survival differences.
These findings suggest that the detection of recurring cytogenetic abnormalities and multilineage dysplasia are the most significant features of current AML classification.
Combination chemotherapy with maintenance chemotherapy
Bone marrow transplantation for relapse
Long-term follow-up of patients ≥60 yr old with acute myeloid leukaemia treated with intensive chemotherapy.
Oberg G, Killander A, Bjoreman M, Gahrton G, Grimfors G, Gruber A, Hast R, Lerner R, Liliemark J, Mattson S, Paul C, Simonsson B, Stalfelt AM, Stenke L, Tidefelt U, Uden AM, Bjorkholm M; LGMS.
Department of Medicine, University Hospital, Uppsala, Sweden.
Eur J Haematol 2002 Jun;68(6):376-81 Abstract quote
It is still controversial how to treat elderly patients with acute myeloid leukaemia (AML), and results have been poor with most regimens.
We report the long-term results of a randomised study performed by the Leukaemia Group of Middle Sweden during 1984-88 comparing two intensive chemotherapeutic drug combinations. Ninety patients ≥60 yr old with untreated AML were randomly allocated to treatment with daunorubicin, cytosine arabinoside (ara-C), and thioguanine (TAD) (43 patients) or a combination in which aclarubicin was substituted for daunorubicin (TAA) (47 patients). Forty-four patients (49%) entered complete remission (CR), 22/43 (51%) in the TAD group and 22/47 (47%) in the TAA group (ns). The CR rate in patients ≤70 yr of age was 30/42 (71%) and in patients >70 yr 14/48 (29%) (P<0.0001). Early death within 30 d after treatment initiation was more often seen in patients >70 yr than in patients ≤70 yr of age, 40% and 12%, respectively (P<0.005). The median cause-specific survival time was 178 d in the total patient group, and the 2-, 5-, and 10-yr survivals were 22%, 11%, and 8%, respectively.
The cause-specific survival was not significantly different between the two treatment arms. At long-term follow-up ≥10 yr after inclusion of the last patient, 5/90 patients (one in the TAD group and four in the TAA group, respectively) were still alive, four in continuous complete remission and one in second complete remission.
Thus, both treatment regimens appear to have similar efficacy, with a relatively high complete remission rate, and a reasonable survival as compared to other studies including some long-term survivors. However, early deaths are still numerous, particularly in patients above 70 yr of age, and the relapse rate is substantial.
Busulfan plus cyclophosphamide compared with total-body irradiation plus cyclophosphamide before marrow transplantation for myeloid leukemia: long-term follow-up of 4 randomized studies.
Socie G, Clift RA, Blaise D, Devergie A, Ringden O, Martin PJ, Remberger M, Deeg HJ, Ruutu T, Michallet M, Sullivan KM, Chevret S.
Service d'Hematologie Greffe de Moelle and Departement de Bio-Informatique, Hopital Saint Louis, Paris, France.
Blood 2001 Dec 15;98(13):3569-74 Abstract quote
In the early 1990s, 4 randomized studies compared conditioning regimens before transplantation for leukemia with either cyclophosphamide (CY) and total-body irradiation (TBI), or busulfan (Bu) and CY.
This study analyzed the long-term outcomes for 316 patients with chronic myeloid leukemia (CML) and 172 patients with acute myeloid leukemia (AML) who participated in these 4 trials, now with a mean follow-up of more than 7 years. Among patients with CML, no statistically significant difference in survival or disease-free survival emerged from testing the 2 regimens.
The projected 10-year survival estimates were 65% and 63% with Bu-CY versus CY-TBI, respectively. Among patients with AML, the projected 10-year survival estimates were 51% and 63% (95% CI, 52%-74%) with Bu-CY versus CY-TBI, respectively. At last follow-up, most surviving patients had unimpaired health and had returned to work, regardless of the conditioning regimen. Late complications were analyzed after adjustment for patient age and for acute and chronic graft-versus-host disease (GVHD). CML patients who received CY-TBI had an increased risk of cataract formation, and patients treated with Bu-CY had an increased risk of irreversible alopecia. Chronic GVHD was the primary risk factor for late pulmonary disease and avascular osteonecrosis. Thus, Bu-CY and CY-TBI provided similar probabilities of cure for patients with CML.
In patients with AML, a nonsignificant 10% lower survival rate was observed after Bu-CY. Late complications occurred equally after both conditioning regimens (except for increased risk of cataract after CY-TBI and of alopecia with Bu-CY).
Auer rod: Azurophilic linear structure found in the cytoplasm in 60-70% of cases of AML. Caused by an alignment of the azurophilic granules. If there is a blast proliferation of 30% or more, one Auer rod in one or more blasts is considered definitive evidence of AML.
Last Updated May 5, 2006
OECD countries need growth if they are to emerge from the crisis and create jobs. But where will that growth come from? Also, with challenges such as climate change and global development, how can cleaner, smarter economic activity be unleashed? Answering these questions may help us plot a path out of the crisis and build a safer future.
In the early 1960s when the OECD was created, you could buy a portable TV for $150 in the US, around the same price as you'd pay for an entry-level LCD television today. But 60 years ago, an average manufacturing worker would have had to work almost two weeks to pay for it, compared with less than a day at present. For many other products we use all the time, a comparison like this isn't even possible, since they didn't exist. In 50 years from now, there will be similar changes we can't yet imagine, and people may look back on call centres, shopping malls and the like with the same nostalgia presently reserved for the dirty, dangerous jobs of the past and the little shop on the corner with its limited range of overpriced merchandise. Although we can't predict what the future economy will be like in any great detail, we do have some idea about the forces likely to shape that economy and the sectors that could drive growth in the years to come. There may be spectacular inventions due to progress in science and technology and radical new ways of organising our lives and how we produce and consume, but many of the sources of growth will be more mundane. The basic categories won't change much either. We'll still need to be fed, clothed, housed, educated, transported, treated and entertained.
So what do we know, or what can we guess with a reasonable degree of confidence? For a start, there will be many more of us. This will be a source of growth in itself for the world economy, especially since people will, on average, be richer than today. And not just in OECD countries. That 1960s TV would have been built in the US, and most of its components would have been American. Since then, new world leaders in consumer electronics have appeared in places like Chinese Taipei, and new economic powerhouses are already emerging across the globe.
But emerging countries won't become rich by following the same path as countries that industrialised earlier. The environmental costs would be too high for a start and, to borrow a cliché, there simply are not enough planets to go around for that. Fortunately, new technologies are enabling developing countries to leap forward in their development, the spread of mobile phones being one example, even if the so-called digital divide remains an issue. All countries are now looking for forms of growth that use resources more efficiently. Energy and transport will be among the earliest drivers of greener growth, and this means changes that will affect everything, such as land values, urban planning, farming and where you live. New technologies will play a role in helping the world feed itself too.
Apart from being richer, more mobile and more numerous, the population will also be older. But the new old will have grown up with today's technologies, attitudes and social norms. They, or to be more precise, we, will expect to stay in our own homes as long as possible, so personal services, and specially-adapted housing and consumer products are likely to be in demand.
Discovering and developing new sources of growth will depend on developing the intellectual assets needed to create, promote, diffuse and adopt the intellectual and material innovations underpinning them. Policymakers have to take a lead, by tapping new sources of growth themselves, setting the regulatory ground to allow new breakthroughs to happen, and breaking down the inertias, whether institutional or economic, that prevent them. But most of all, they have to invest in innovation and skills. The OECD is working hard on areas that can help improve government policy, by setting out innovation and green growth strategies, examining market and regulatory incentives, overseeing rules on biotechnology, and much more. None of this would be possible without knowledge.
The roots that allow our future societies to flourish will, as ever, be education, research and training.
Bulk means sea.
Although the wars in Iraq and Afghanistan continue to dominate the headlines, both conflicts are very much land-force-driven operations. However, significant investments into expeditionary and amphibious capabilities by the world's leading navies continue. The last twelve months have shown that the ability to perform ship-to-shore operations to provide humanitarian support in cyclone, flood or earthquake-stricken areas is essential.
Navies are increasingly being called up to assist government agencies, international bodies such as the United Nations and non-governmental organisations in providing assistance and relief in the aftermath of major disasters. Two recent events have demonstrated this. In December 2007 Bangladesh suffered a devastating cyclone. Washington DC reacted quickly and the US Navy's Pacific Command immediately despatched a 28-strong humanitarian assistance team to the region. This was quickly followed by the arrival of the USS Kearsarge Landing Ship Dock (LSD) along with the 22nd Marine Expeditionary Unit (MEU).
US Pacific Command had ordered the deployment once it became clear that Cyclone Sidr would hit the Bangladeshi coast. In particular, the Kearsarge played a vital role in supplying fresh water; over 757,000 litres (200,000 US gallons) per day can be filtered on the vessel. This ship was later joined by the USS Tarawa Landing Helicopter Assault (LHA) ship and the 11th MEU from the Western Pacific to continue the relief efforts. The Tarawa had, ironically, assisted disaster relief efforts in Bangladesh before, in 1991.
The US Navy also offered assistance to the victims of the Burmese cyclone in early May. The navy's Essex Expeditionary Strike Group (ESG), which consisted of the USS Essex, Harpers Ferry, Juneau and Mustin (a total of two Landing Ship Docks, a single Landing Platform Dock and a guided missile destroyer), was sailing off the southern coast of the country. Despite the impressive capabilities of the ESG, coupled with the navy's offer to also despatch the USNS Mercy hospital ship to the country, the reclusive military junta which rules the country turned down Washington's offer.
Although forces such as the US Navy are directing their attention towards how they can enhance their provision of humanitarian assistance, offensive amphibious operations have not been forgotten. Vehicle manufacturers in particular are turning their attention towards armoured platforms which have amphibious capabilities to not only protect their occupants on land, but also during transit from ship to shore.
General Dynamics European Land Systems (GDELS), which builds the Piranha series of armoured fighting vehicles, took advantage of June's Eurosatory exhibition in Paris to unveil the Piranha III High Protection (HP) model. This version of the Piranha III is equipped with robust protection against Improvised Explosive Devices (IED) and anti-vehicle mines. Vehicle occupants are protected by energy-absorbing seats, and the Piranha III has also been fitted with an enhanced Caterpillar engine that develops 343 kW (460 hp).
One major benefit of this vehicle is that it retains the amphibious capabilities of the baseline Piranha III version and is capable of operating in conditions of up to Sea State Three. This allows the vehicle to perform ship-to-shore transfer and allows for its occupants to ride in a heavily protected carrier once on land. Using the Piranha IIIHP for amphibious operations means that the troops need not change vehicle (from landing craft to APC, for example). Crucially, the Piranha IIIHP also protects them in the surf and beach zones, where troops can be highly vulnerable to attack when travelling in open landing craft or moving across the beach.
The Spanish government has placed a contract with GDELS to procure 21 Piranha IIIC amphibious vehicles to equip the Infanteria de Marina (Spanish Naval Infantry). All of the vehicles are expected to be delivered by 2014. They will all have an amphibious capability, with two propellers positioned on the aft side of the vehicle and a remote-controlled trim vane. The order increases the 18-strong Piranha IIIC force which the naval infantry already has, and the new vehicles will be delivered in several versions including ambulance, command and control, engineer and recovery, fire support, reconnaissance and armoured personnel carrier configurations.
To the west, the Corpo de Fuzileiros (Portuguese Marine Corps) will receive 20 Pandur II 8 x 8s. The order breaks down into ten armoured personnel carriers equipped with a 12.7-mm machine gun, single examples of command and control and engineer and recovery vehicles, two vehicles equipped with a 120-mm mortar, two ambulances and a pair of 30-mm cannon-armed personnel carriers.

The amphibious Pandur and Piranha IIIs are joined by Patria's 8 x 8 Armoured Modular Vehicle (AMV), which was unveiled in 2007. The vehicle is equipped with a pair of jets that can propel the infantry fighting vehicle through the water. The United Arab Emirates placed an order for the Patria AMV in January 2008, which will give that country's armed forces an amphibious vehicle capability, although the number to be delivered has not been reported.
Meanwhile, the Indonesian government hopes to acquire Russian BMP-3F amphibious tanks, with around twenty units expected to be purchased. However, following the election of Russian President Dmitry Medvedev, the deal was reported to be on hold as the Russian government reconsiders the credit terms that were originally offered by President Vladimir Putin's administration to Jakarta. The deal has a sense of urgency following the loss of a BTR-50PK amphibious APC that Indonesian Marines were using during an exercise, which caused the death of six troops.
Brazilian amphibious capabilities received an important enhancement in late 2007, with the government announcing that a new 6 x 6 amphibious armoured personnel carrier would be purchased for the Brazilian Army. Constructed by Fiat do Brazil, this new vehicle is known as the VBTP-VR and 16 will initially be built, before a larger contract is announced to replace the country's Urutu 6 x 6 amphibious APCs.
In terms of ship-to-shore landing craft, the Armada Bolivariana de Venezuela (Venezuelan Navy) is to acquire nine Griffon 2000TD hovercraft, which are currently being assembled in-country. The procurement of the hovercraft represents the navy's reorientation of part of the country's Division de Infanteria de Marina (Marine Infantry) towards riverine operations, with a new battalion of troops being raised for these tasks. The hovercraft will supplement the Capana class Landing Ship Tanks (LST) which the navy has used since 1984.
In January 2008, Navantia delivered the last LCM-1E class landing craft to the Armada Espanola (Spanish Navy). The service has acquired twelve of the vessels which were ordered in 2004. The landing craft have a roll-on/roll-off design and can carry around 100 tonnes at speeds of up to 22 km/h (twelve knots) over 296 km (160 nm). The company hopes to secure additional orders of the craft from Australia to accompany that country's Strategic Projection Ship. The LCM-1E class vessels can be carried in the Navy's two Galicia class and single Juan Carlos 1 LPDs (see below).
The Netherlands is also investing in new landing craft, and in 2007 the Dutch government signed a contract with Visser Scheepswerf for the provision of twelve Vehicle Personnel Landing Craft (LCVP) Mk 5c vessels to be supplied to the Royal Netherlands Marine Corps. These landing craft will equip the Rotterdam class LPDs (see below). The LCVP Mk 5c craft are to be completed this year and the entire class will have entered service by 2011. The landing craft will eventually replace the twelve L9530 LCVP Mk 2 and L9536 Mk 3 vessels that the navy currently operates, and will augment the service's L9525 Landing Craft Utility (LCU) Mk 2 vessels, which have been upgraded to each carry a pair of Leopard-2A6 Main Battle Tanks.
In July 2007, the final Griffon-8100TD hovercraft was delivered to the Swedish Defence Materiel Administration to equip the navy's Amphibious Battalion. So far the country has purchased three of these hovercraft, acquiring the first two in 2006 and 2007 respectively. The vessels are equipped with a glass cockpit and triple-redundant controls. They are the largest hovercraft built by Griffon and can move a payload of ten tonnes at speeds of up to 74 km/h (40 knots), although unladen the hovercraft can reach speeds of 93 km/h (50 knots).
On 30th May 2008, EPS of the United States announced a contract to build two M-10 hovercraft for the Saudi Arabian Border Guard, which will be delivered in 2009. The vessel can travel at speeds in excess of 93 km/h in conditions of up to Sea State Four and can lift up to eleven tonnes. This can translate into the carriage of light vehicles or up to 70 troops. The construction of the vessel uses weight-saving materials such as fibre-reinforced plastic and Kevlar. The hovercraft has been designed for over-the-horizon landing operations and patrol duties in swamp and littoral areas. The vessel also boasts a low radar signature and a range in excess of 926 km (500 nm).
The Turkish government announced in August last year that the country would acquire up to eight Landing Craft Tank (LCT) vessels to equip the Turk Deniz Kuvvetleri (Turkish Navy). The Turkish navy already has 25 C-117 class LCTs in service, along with 17 C-302 class mechanised landing craft (LCM). The new LCTs will replace the C-117 vessels and eight vessels will be purchased to this end. The landing craft are expected to lift in excess of 200 tonnes, which can include up to 260 personnel and equipment, or alternatively three MBTs, along with three tons of ammunition. The LCT acquisition comes at a time when the Turkish Navy is performing a wholesale overhaul of its amphibious capabilities, with the country in the market for a single large LPD, two Landing Craft Personnel Vehicles, four LCMs and up to 30 amphibious assault vehicles.
A number of companies responded to the Turkish government's request for information, including DCNS and Constructions Industrielles de la Mediterranee of France, along with Merwede of the Netherlands, Hanjin Heavy Industries of South Korea and Navantia. Other European shipbuilders are also offering proposals, including Fincantieri, ThyssenKrupp Marine and Downey Engineering. Around twelve local companies have also shown interest in meeting the requirements, and these firms could emerge as possible partners for the European suppliers to help fulfil Ankara's desire to have a high degree of domestic involvement in these landing craft projects.
Amphibious Support Ships
Several acquisition and development programmes for large amphibious support ships are ongoing around the world. In Spain, Navantia was celebrating the launch of the largest vessel to enter service with the Armada Espanola, the Juan Carlos 1 (see Armada 3/2008, page 81). The eponymous Spanish monarch attended the launching ceremony of this new LPD on 10 March. The vessel has a flight deck that can accommodate AV-8B combat and V-22 tilt-rotor aircraft, plus NH90, CH-47 and AB-212 helicopters. The ship is widely expected to replace the Pizarro and Hernan Cortes, Newport class US Navy LSTs which were acquired by the Spanish Navy in the mid-1990s.
In the United Kingdom, the Royal Navy performed the final stage of its Royal Fleet Auxiliary LSD renewal programme. The senior service took receipt of its last Bay class vessel, the RFA Lyme Bay, which was formally commissioned on 8 August 2007. The vessel joins its sister ships the RFA Largs Bay, RFA Mounts Bay and RFA Cardigan Bay.
Meanwhile, the Koninklijke Marine (Royal Netherlands Navy) performed the final series of sea trials with the HNLMS Johan de Witt LPD before the vessel is formally commissioned. The Johan de Witt will join the HNLMS Rotterdam and is very similar to its sister vessel, except that the former is also equipped with command and control facilities.
South Africa has launched the Project Millennium programme, which is tasked with procuring a Landing Helicopter Dock (LHD) vessel to perform a number of missions. Project Millennium is currently a feasibility study into exactly the type of vessel that will be acquired. The South African Navy is expected to acquire at least two vessels, each displacing around 20,000 tonnes. The ships would perform a number of missions including sealift, humanitarian support, search and rescue co-ordination and joint-force command and control. The acquisition is expected to cost up to $1.16 billion per vessel and the ships will enter service around 2013. The South African Navy is reported to have several existing vessel types under consideration, including the Mistral class, Australia's Strategic Projection Ship and ThyssenKrupp Marine Systems' concept MHD 150.
In Latin America, towards the end of 2007, Brazil took delivery of the former RFA Sir Galahad. The vessel features fore-and-aft access ramps.
To the west, the Armada de Chile (Chilean Navy) has a need for a roll-on/roll-off vessel which could be used to support amphibious operations. The vessel will be required to have a displacement of up to 10,000 tonnes and a flight deck that can accommodate up to four helicopters. However, Chile's requirement is not expected to be fulfilled with a new-build vessel and will instead be met by a ship purchased on the international second-hand market.
Following the devastating tsunami that hit the Indian Ocean region in 2004, the Tentera Laut DiRaja Malaysia (Royal Malaysian Navy) has articulated a desire to purchase LPD class ships to assist with military operations and humanitarian missions around the region. A request for proposals for up to three LPD vessels is expected imminently, and European shipyards in France, Spain and the Netherlands are likely to offer proposals in addition to the Hanjin Heavy Industries shipyard in South Korea. It is anticipated that the vessels would enter service by 2015.
Canada, meanwhile, has had a longstanding requirement for an amphibious operations support vessel. The vessel is to be procured to allow the Canadian Forces Maritime Command to rapidly deploy around 800 troops. However, plans to acquire the ship have been postponed until after 2011, despite the vessel being essential for Canada's plans to raise a Standing Contingency Force to support expeditionary operations.
Ottawa is currently engaged in the Joint Support Ship programme, with BAE Systems, SNC-Lavalin Profac and ThyssenKrupp Marine Systems all offering proposals. The plans call for three 28,000-tonne ships to enter service by 2016. The ships will accommodate up to three helicopters along with vehicles and containers. The vessels will also have a 60-bed hospital and facilities for up to 75 personnel to staff a joint operational headquarters. In order to finance the procurement, Canada will decommission the HMCS Protecteur and Preserver auxiliary oil replenishment ships, together with several aircraft. Downselect of the successful proposal is expected this year, with the first vessel scheduled to enter service in 2012.
The US Navy is continuing its investment into its amphibious support ships. In early July, Northrop Grumman made an announcement concerning the USS Green Bay.
In December 2007, the company won a contract worth $1 billion to construct the ninth vessel in the San Antonio class. The ship will be called USS Somerset and continues from the USS Mesa Verde, which was commissioned in December 2007, and the USS New York, which was formally named in the same month. In late June 2008, the US Navy announced that the first of its new series of LHA ships would be christened USS America once the ship, the lead vessel in the America class, is delivered to the US Navy in 2012. The ship was to have originally been called the USS Gerald Ford, but a concerted campaign to petition Donald C. Winter, the Secretary of the Navy, triggered the name change. The name continues the tradition of the Kitty Hawk class aircraft carrier USS America.
The US Office of Naval Research (ONR) is looking at acquiring the so-called Sea Base Connector-Transformable Craft (T-Craft), which will be a high-speed vessel to transport and unload materiel over distances of up to 4700 km (2537 nm). The craft could load and offload cargo onto amphibious support vessels such as LPDs, and also have the wherewithal to transport equipment across the beach as a truly 'go-anywhere' long-range landing craft. The motivation behind the US Navy's acquisition of the T-Craft is that its existing landing craft and Landing Craft Air Cushion (Lcac) hovercraft have to be transported in the well deck of an LPD until they are within suitable range of the beachhead, which uses up valuable space inside the amphibious support ships.
However, existing landing craft carry a relatively limited amount of cargo, up to 68,000 kg in the case of the Lcac, and rely on good sea conditions. The US Navy also wants to increase the stand-off distances for the amphibious support ships, moving them beyond current landing craft ranges to distances of around 463 km (250 nm) from the beachhead whenever possible.
The Office of Naval Research is looking to industry to develop a craft that would have this range and could travel, when loaded, at speeds of up to 74 km/h (40 knots) while carrying loads of up to 680 tonnes. The craft must also be able to drive across the beach when landed, be capable of travelling at up to 37 km/h (20 knots) in Sea State Five and must be survivable in conditions of up to Sea State Eight. The craft must also be able to load and unload cargo from the amphibious support vessel in conditions of up to Sea State Four.
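As a side note, the performance requirements above mix metric and nautical units. Assuming the standard conversion factors (1 knot = 1.852 km/h; 1 nautical mile = 1.852 km), the paired figures quoted in this article can be sanity-checked with a short script (the function names here are illustrative, not from any source):

```python
# Sanity-check the speed and range conversions quoted in the text.
# Standard factors: 1 knot = 1.852 km/h; 1 nautical mile (nm) = 1.852 km.

KNOT_KMH = 1.852  # km/h per knot
NM_KM = 1.852     # km per nautical mile

def knots_to_kmh(knots: float) -> float:
    return knots * KNOT_KMH

def nm_to_km(nm: float) -> float:
    return nm * NM_KM

# Figures from the article, rounded to the nearest whole unit as printed:
assert round(knots_to_kmh(40)) == 74    # T-Craft loaded speed: 40 knots = 74 km/h
assert round(knots_to_kmh(20)) == 37    # Sea State Five speed: 20 knots = 37 km/h
assert round(nm_to_km(250)) == 463      # stand-off distance: 250 nm = 463 km
assert round(nm_to_km(2537)) == 4699    # T-Craft range, printed as ~4700 km
```

The last figure shows the usual round-trip effect: 4700 km converts to 2537.8 nm, which the text truncates to 2537 nm.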
Three consortia are working on the programme, including a team led by Alion Science and Technology which comprises John J. McMullen Associates (JJMA), Nichols Boat Builders and Raytheon Integrated Defense Systems, and which is looking at an air-cushioned design as one way to achieve the ONR's requirements. Meanwhile, Umoe Mandal has teamed with General Atomics, Kiewit Offshore Services, Griffon Hovercraft, Island Engineering, Fireco and Wamit. The company is experimenting with hovercraft technology and the Surface Effect Ship (SES) design, used on the Skjold class fast patrol boats operated by the Royal Norwegian Navy, to develop a hybrid vessel which would use both an air-cushion and SES design. Meanwhile, a hybrid air-cushion and catamaran design is being investigated by a team including Textron Marine and Land Systems, Mino Marine, the Littoral Research Group, L-3 Communications, Naval Surface Warfare Center (NSWC) Panama City and CDI Marine. The ONR will choose a winning design for a contract worth $150 million to develop a full-sized demonstrator that will be ready for trials around 2013.
Meanwhile, the US Army has announced a major restructuring of its watercraft fleet. As part of an initiative known as Joint Logistics Over The Shore (Jlots), the army is scheduled to operate 148 vessels by 2013 which will include high-speed boats capable of landing a Stryker armoured vehicle. The force will also acquire up to 34 landing craft and eight 83-metre-long logistics vessels. Floating causeway and causeway ferries will also be purchased.
The US Army's current watercraft fleet includes the LCU-2000 52-metre landing craft and the LCM-8 22-metre landing craft, most of which were used during the Vietnam War and were based around World War Two-era designs. The vessels are part of the 7th Transportation Group at Fort Eustis, Virginia, and the boats were active during Operation Iraqi Freedom, providing lift for Marine Corps and Special Operations troops during amphibious missions. In co-operation with the ONR's T-Craft programme, the army is looking at a larger vessel that could transport more troops and materiel over longer distances, preferably up to 300 troops or 23 Stryker vehicles. To this end the Army has performed trials, in conjunction with the US Navy, with the Joint Venture, a catamaran owned by the Isle of Man Steam Packet Company. The vessel served as a command ship for humanitarian operations during the 2005 Pakistan earthquake and also supported special ops' missions during Operation Iraqi Freedom.
The US Marine Corps is running into trouble trying to replace its AAV7A1 amphibious landing vehicle with the new Expeditionary Fighting Vehicle (EFV). The new design is intended to carry up to 17 troops at speeds of 46 km/h (25 knots) at sea. The vehicles are intended to have a range of around 40 km (21 nm) from ship to shore. The corps has maintained that the EFV, with its flat-bottomed hull, has the best design. However, members of the US Congress have disagreed, arguing that the current design lacks adequate protection against anti-vehicle bombs. Instead, they have urged the Marine Corps to consider a V-shaped hull design, although members of the Corps have argued that this will reduce the vehicle's sea performance. Current prototypes being trialled by the Marine Corps have also suffered around one failure for every four and a half hours of operation.
Other problems include the cost increases for the EFV programme, which have risen by almost 30% with each vehicle predicted to cost around $17 million, with over 570 units expected to be purchased. General Dynamics Land Systems, which is developing the EFV, has also suffered problems in developing a propulsion system that can move the EFV through the water at the required speed. Moreover, the vehicles were originally supposed to enter service in 2008, however they will not enter the US Marine Corps inventory until 2015 at the earliest.
The range and depth of missions that navies are expected to perform in terms of amphibious operations have increased significantly over the last fifteen years. Naval operations to provide humanitarian relief for victims of natural disasters in Asia and around the Indian Ocean region following the 2004 Tsunami will be repeated in the future.
High-intensity military amphibious operations are here to stay. Looking back at the landings on the Al Faw peninsula at the start of Operation Iraqi Freedom in 2003, one sees that this action could be a harbinger of things to come. It is not inconceivable that a country's armed forces may have to fight their way from the sea to their objectives on land, no matter how dangerous such an operation might be.
Fortunately, recent and new designs of armoured vehicles offer a means of protected transport, an amphibious capability that can get troops to the shore in relative safety, and a protected environment once on land. The recent acquisitions of hovercraft by Sweden and Saudi Arabia, plus the US T-Craft initiative, underline the importance of being able to reach the shore at speed. The less time a squad of troops or their equipment spends at sea, the smaller its window of vulnerability. Finally, navies across the world are also recognising the capabilities offered by large amphibious support combatants, not only to provide a means to transport troops and materiel, but also to provide a mobile air base, deployable command and control facilities, and support services such as hospitals and drinking water filtration systems.
From Russia On a Cushion
Russia has a strong history in the construction of civilian and military hovercraft. Amongst the military models offered to the export market by Rosoboronexport are the Zubr (Bison) and the Murena-E (Moray). At 550 tonnes, the Almaz Zubr is simply the world's largest hovercraft; it has a range of over 216 nm, a top speed of 60 kts and can land three main battle tanks. Also from Almaz, the smaller Murena-E tips the scales at 150 tonnes with a payload of 24 tonnes. It will carry 130 men over a distance of 200 nm at a cruise speed of 50 kts.
There exists a certain amount of confusion today about what money truly is, how it originated and who should produce it (the government or private individuals). For this reason, it is useful to provide a brief summary of the origin of money and the differences between the various types of money. In this manner it will become clear that money should only be produced by the market.
According to Ludwig von Mises[i], money evolved from the practice of indirect exchange. Indirect exchange is where the seller of a particular good sells his good for another good, not for the purpose of consuming that second good but because it is highly marketable. In other words, now that he has obtained this highly marketable good, he has full confidence that he can sell it to obtain the consumption goods he ultimately desires. This highly marketable good is the common medium of exchange and is generally known as money. There are secondary functions (store of value, measure of value, etc.) but these merely derive from the medium of exchange function.
The question remains, why is this good so highly marketable in the first place? What original characteristics made it so desirable for people to use it as money?
To answer this question we must define what a good is. Carl Menger[ii] identifies the following prerequisites for a good:
- A human need for the item
- Capacity for the item to satisfy this need
- Human knowledge that the item can satisfy this need
- Sufficient control of the item such that one can satisfy their need.
Absent any one of these prerequisites, the thing ceases to be a good. Menger also notes that some items are treated by people as though they were goods even though they lack all four of these prerequisites. This occurs when attributes are “erroneously ascribed to things that do not really possess them” or when “non-existent human needs are mistakenly assumed to exist”. Menger called such items imaginary goods.
Next we must determine what makes a good valuable. Menger[iii] makes it clear that there are two qualities that imbue a good with value. The first is that it should be an economic (i.e. scarce) good. In other words, the requirements (or demand) for a good must be greater than the quantity of the good available. Second, men must be “conscious of being dependent on command of them for the satisfaction of our needs”. To summarise, only scarce goods which we know can satisfy our needs have value.
Now we know what a good is and what gives it value, but what makes it useful as money? According to Jorg Guido Hulsmann[iv], to be used as money the good must be marketable. It must be a commodity; i.e. a valuable good that can be widely bought and sold. One must know that if they sell their produce and receive this commodity in return, that they can instantly sell this commodity to obtain the goods they desire (i.e. food, clothing, etc.).
The monetary use of a commodity is derived from its non-monetary use. When we consider how money comes into being (through indirect exchange) we know this must be the case. This is because (as Hulsmann[v] tells us) the prices initially being paid for a commodity’s non-monetary use allow one to estimate the future price for the commodity when it is resold. This is the basis for its use in indirect exchange.
In the case of gold or silver, it is obvious that these commodities have a value independent of their monetary use. Gold has historically primarily been used as jewellery and today, like silver, it has many industrial uses that establish a non-monetary value.
It is clear now that paper money established by government fiat cannot have any non-monetary value. It is not a good (according to the definition by Menger) or a commodity that can be widely bought and sold. No man desires paper money for its own sake. It cannot satisfy any need of man. As such, the quantity available infinitely exceeds the requirements for it. It is valueless. It is arguably, an imaginary good, as described by Menger. Value has been attributed to it by the government even though none exists.
Paper money is useless to individuals and is only truly useful to the government which can use it to more easily tax us. But if fiat currency has no value why then do people accept it in payment for goods and services rendered?
Over time people became accustomed to accepting “paper” money certificates having previously received and transferred warehouse receipts in the form of banknotes. Nominally, these banknotes were backed by gold and people were generally confident of receiving gold from banks should they wish to redeem the banknote for such. (In truth, however, banks, generally holding fractional reserves, strongly discouraged their customers from redeeming their banknotes).
Later, the practice of fractional reserve banking in which such banks issue banknotes only partially backed by specie was legalised. In time only one bank (i.e. the central bank) was granted a monopoly on the issuance of banknotes governed by a gold standard in which each banknote can be exchanged for a fixed amount of gold.
This bank note monopoly would be reinforced with legal tender laws, put in place by the government. Having taken control of money in this way, the government can “fiddle” the money supply in its favour by manipulating the gold standard (by arbitrarily fixing the exchange rate between bank notes and gold) until finally specie payments are permanently suspended. At this point, the population has already become accustomed to paper money and whether or not it is backed by gold no longer seems important to them. There is no significant protest of what is in effect, an appalling violation of property rights. In the final stage, governments completely remove the gold backing from banknotes, granting them a new and powerful method of taxing the population.
Some critics argue that paper money has value not because of the government but because someone will always accept it. This of course does not take in account the progression described above nor does it consider what would happen in a free market of money. Were the government to cease its intervention in the money market people would attempt to hoard hard money (gold, silver, etc.) and spend only the paper money in an attempt to rid themselves of this worthless “currency”. Everyone would want to spend the paper money and no one would want to accept it. The value of paper money would quickly fall to zero in a free market. Paper money has nominal value today because the government has full control of money production.
Misconceptions of money
Confusion concerning the difference between gold money and paper money is common. To some money is money and what does it matter whether it is made of gold or paper? Going further, some observers suggest that the best way to determine which money is superior is to allow fiat paper money and gold money to circulate in the free market and see what happens. This is nonsense. As we have seen above, paper money has no value and without government support would vanish very quickly. Further, in a free market, there would be no such thing as fiat money.
A further misconception concerns the gold standard. There are those who propose that our monetary problems would be solved if we would only return to a gold standard. Often it seems that people confuse gold money with a gold standard. They are not the same. A gold standard is fundamentally a legal tender law established by the government. It sets up an exchange rate between banknotes and specie (gold) which can be modified to suit the government and suspended at will (in times of war for example) in order to raise funds via inflation or protect favoured banks from bankruptcy.
There are those who consider money to be credit and vice versa. While credit can conceivably serve as part of an indirect exchange (Hulsmann[vi]), it is not money per se. It has certain disadvantages when compared to commodity money. For example, credit is not homogeneous but can vary in terms of maturity, interest rate, amount, and of course the creditworthiness of the borrower. Credit money is unlikely to be widely traded by individuals since it carries credit risk (i.e. the risk that the borrower will be unable to repay the credit note). Thus, it is unlikely that credit money will ever arise on the free market as the primary money. Rather, it will remain primarily the province of investors and money lenders.
Why should money be produced by the market and not the government?
Money should and can only be produced by the market. The market will select the most efficient valuable commodity (gold, silver, etc.) as the optimal money. This protects individuals from the costs of monetary manipulation by government (including the ultimate results we are witnessing now, the collapse not just of major banks but also the governments who are their clients). Market selected money also reduces the likelihood and severity of the business cycle as it places a significant constraint on the fraudulent operations of fractional reserve banks.
Fiat paper money produced by the government represents a massive violation of people’s property rights and effectively amounts to fraud, counterfeiting and theft on a grand scale. There can be no rational ethical or economic argument in favour of government intervention in money. Fiat paper money is the tool by which government surreptitiously transfers wealth from the general population to itself or those whom it favours.
Can gold ever be inflationary?
Inflation is properly defined as an increase in the number of banknotes that is not backed by specie (i.e. gold). Defined thus, we can see immediately that an increase in gold does not cause inflation or result in the business cycle. As Murray Rothbard[vii] tells us and as discussed above, gold provides a non-monetary value in addition to its monetary value, and so an increase in gold implies an increase in the wealth of society (greater amounts of gold for industrial, medical or consumer purposes). Will prices of other goods in terms of gold increase? Possibly, but now we can see the confusion that can occur as a consequence of erroneously defining inflation as merely a rise in prices. An increase in gold would be no more an issue than an increase in the supply of iron ore, oil or any other critical raw material.
Inflation is a result of some form of fraud (fractional reserve banking) or counterfeiting. Consider the recent stories of tungsten filled gold bars – if true, then someone is getting something for nothing. The buyer of the gold bars is paying in anticipation of receiving the value of a certain quantity of gold but in reality is receiving significantly less. The buyer is receiving a “fraction” of the value he expects. The value of this “gold” bar has been inflated and losses will result. It follows therefore that losses will result from the fractional reserve system of banking, especially when the buyer of a gold certificate discovers that there is insufficient gold to cover the value of his certificate.
To conclude, we have found that the optimal money derives its value from its prior non-monetary use (i.e. that of being a valuable commodity). Paper money has no prior non-monetary use and thus derives its value from government legal tender laws. In other words, it has merely an imagined value. In free market, there would be no fiat paper money. Government has no place in the production of money. Free money protects the population from the costs of fractional reserve banking and stunts the growth of government. Furthermore, with free market gold money (or similar) inflation will be limited to the illicit activities of fractional reserve banks thus the length and depth of the business cycle will be greatly reduced.
[i] Ludwig von Mises, The Theory of Money and Credit (New Haven: Yale University Press, 1953) 30-37.
[ii] Carl Menger, Principles of Economics (Ludwig von Mises Institute, 2007) 52-53.
[iii] Ibid. 114-115.
[iv] Jorg Guido Hulsmann, The Ethics of Money Production (Ludwig von Mises Institute, 2008) 23-24.
[vi] Ibid. 28-29.
[vii] Murray Rothbard, The Mystery of Banking (Ludwig von Mises Institute, 2008) 47-48.
Previously published at Paper Money Collapse on Wednesday, 4 October.
In today’s Financial Times Mark Williams argues that the recent correction in gold means the gold “bubble” is finally bursting. Unfortunately, he does not provide a single reason for why the 10-year bull market in the precious metal constitutes a “bubble”, nor why this rally must end now.
According to the narrative of this article, investing in gold must have always been quite an irrational endeavour. Such folly was simply made easier with the advent of liquid ETFs (exchange-traded funds), which made the gold market more accessible to the small investor and trader. From then on, an irrational rally must have just fed on itself. To quote Mr. Williams:
“By 2005, more and more investors tried to rationalise why gold was no longer a fringe investment. It was a hedge against a weak dollar, global turmoil, incompetent central bankers and inflation. As trust in the financial system declined, gold would naturally rise, they reasoned.”
How silly! How could they believe that?
So according to Mr. Williams, gold has been going up because….it had been going up before. The investors simply rationalized it with hindsight. But gold recently went down, and down quite hard. Measured in US dollars, gold is down 16% from its peak on September 5. And now it has to go down further, so reasons Mr. Williams. If people bought it because it was going up, they must now sell it because it is going down.
Toward the end of his article we get the usual bon mot – by now repeated ad nauseam by Warren Buffett – that gold does not produce anything, does not create jobs, and does not pay a dividend. Yawn.
Gold is money
To compare gold with investment goods is wrong. Gold is money. It is the market’s chosen monetary asset. It has been the world’s foremost monetary asset for thousands of years. It has been remonetised over the past 10 years as the global fiat money system has been check-mating itself into an ever more intractable crisis. Faith in paper money as a store of value is diminishing rapidly. That is why people rush into gold. It doesn’t replace corporate equity or productive capital. It replaces paper money.
At every point in time you can break down your total wealth into three categories: consumption goods, investment goods and money. If you buy gold as jewellery, it is mostly a consumption good. If you buy gold as an industrial metal to be used in production processes, it is mostly an investment good. However, most people buy gold today as a monetary asset, as a store of value that is neither a consumption good nor an investment good. Therefore, you have to compare it with paper money. That is the alternative asset.
The paper dollars and electronic dollars that Mr. Bernanke can create at zero cost and without limit, simply by pressing a button, equally do not produce anything, do not create jobs, and do not pay dividends either. Although, sadly, the reflationists and advocates of more and more quantitative easing – many of them writing for the FT – seem to think that this is what paper money does. Alas, it doesn’t. It only fools the public into believing that lots of savings exist that need to be invested, or that enormous real demand exists for financial and other assets. Expanding money is a trick that is beginning to lose its magic.
The dollars in your pockets do not generate a dividend, neither does the gold in your vault or your ETF. So why do you even hold money?
Because of uncertainty. You want to stay on the sidelines but want to maintain your purchasing power without spending it on consumption and investment goods in the present environment and at current prices.
Stocks, bonds and real estate have been boosted for decades by persistent fiat money expansion. Now that the credit boom has turned into a bust it is little wonder that people are reluctant to buy more of these inflated assets. (Some real estate and some stock markets are currently already deflating, which is urgently needed. But bonds are not. If there is a “bubble” at all, it is in government bonds, although that bubble seems to begin to deflate as well — one European sovereign at a time.)
People want to preserve spending power for when the bizarrely inflated debt edifice has finally been liquidated and things are cheap again. But policymakers and their economic advisors do not want that to happen (“Oh no, that dreadful deflation! No! Anything but a drop in prices!”) and they are using the printing press to avoid, or better postpone the inevitable at all cost, even at the cost of destroying their own paper money in the process. And that is why you cannot hold paper money and have to revert to eternal money: gold.
Gold versus paper money
Mr. Williams quotes the market value of the world’s largest gold ETF, GLD, at $65 billion at present, apparently considering this already proof of how mad things have become in the world of gold investing. Well, consider this: in just the first 8 months of the year 2011, Bernanke created $640 billion – out of thin air – and handed it to the banks. Since the collapse of Lehman Brothers, the Fed has created reserve dollars to the tune of $1,800 billion, or more than twice as much as the Fed had created from its inception in 1913 up to the Lehman collapse in 2008. Or, if you like, 27 GLDs at present market value. The money supply in the M2 definition has gone up also by $1,750 billion since Lehman. Mr. Williams, why is anybody still holding these absurd amounts of paper cash? Isn’t that the more interesting question rather than the tiny amounts that they hold in the form of physical gold?
The biggest owner of gold is allegedly the United States government. I say ‘allegedly’ because they have not done a proper audit for a while. Supposedly, the U.S. has 261 million ounces of gold in their vaults at Fort Knox. At current market price that is a market value of $423 billion. Bernanke created more paper money between last Christmas and last Easter!
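As a sanity check, the arithmetic behind the last two paragraphs can be reproduced directly (the dollar figures are those quoted in the text, at 2011 values):

```python
# Back-of-envelope checks of the figures quoted above (2011 values).
gld_market_value = 65e9            # GLD ETF market value, USD
reserves_since_lehman = 1.8e12     # reserve dollars created since 2008

# "27 GLDs at present market value" (truncating the fraction)
print(int(reserves_since_lehman / gld_market_value))  # 27

fort_knox_ounces = 261e6           # alleged US gold holdings, troy oz
fort_knox_value = 423e9            # market value quoted in the text, USD

# Implied gold price behind the $423 billion valuation
print(round(fort_knox_value / fort_knox_ounces))      # 1621 USD per ounce
```

The implied price of roughly $1,621 per ounce is consistent with the gold price prevailing when the article was written.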
And those who, like Mr. Buffett, feel like joking that the entire stock of gold fits under the Eiffel tower – ha! ha! ha! – let them be reminded that the trillions that Mr. Bernanke created fit on the SIM cards in their mobile phones. It is all electronic money – and when Mr. B turns into a monetary Dr. Strangelove and goes bonkers with those nuclear buttons, there will be much more fiat money around.
Let me be clear on this point: the fact that money today consists of paper or is even immaterial money and consists of no substance at all is, no pun intended, immaterial to me. It doesn’t matter. As an Austrian School economist, the concept of “intrinsic value” that some gold bugs cite in defence of gold money is meaningless to me. Money does not need a substance to be money. The problem with modern money is not its lack of substance but its perfectly elastic supply. The privileged money producers create – for political reasons – ever more of it. That is the problem. And that is why the market remonetises gold. Nobody can produce it at will.
Here is Mr. Bernanke again:
“The U.S. government has a technology, called a printing press (or, today, its electronic equivalent), that allows it to produce as many U.S. dollars as it wishes at essentially no cost…We conclude that under a paper-money system, a determined government can always generate higher spending and hence positive inflation.”
But to Mr. Williams, paper dollars are now a safer bet than gold:
“Fears of a Greek default and eurozone turmoil are now prompting investors to buy US dollars – which many are starting to see as a safer bet than the euro or volatile gold.”
Hmmm. Safer? Are you sure?
What’s next for gold?
Mr. Williams may, of course, be right in predicting that the gold price may go down from here. For that to happen the faith in paper money has to be restored, at least to some degree. The printing presses have to stop and liquidation must be allowed to proceed. And that is precisely what happened under Fed chairman Volcker in 1979. That is what caused the previous correction in the gold “bubble”.
The question is this: How likely is this now?
In my view the present sell-off in gold is the result of the market going through another deflationary liquidation phase, yet at the same time the central bankers seem reluctant to throw more money at the problem. The ECB is buying unloved Italian sovereign bonds rather joylessly at present, and Bernanke seems for the time being happy to reorganize his bond portfolio rather than to print more money. Alas, I don’t think it will last. I am fairly confident it won’t last. They won’t have the stomach to sit tight.
Pressure is already building everywhere for more quantitative easing. Ironically, on the very same page of the FT, on which Mr. Williams argues that the gold bubble has burst, Harvard economist Kenneth Rogoff presents his case that this time is not so different, and that we can simply kick the can down the road once more by easing monetary policy, just as we have done for decades. In China, in Europe, everywhere, just print more money. And I already made ample references to Martin “Bring-out-the-bazooka” Wolf, who desperately urges the central banks to print more money.
Will the central bankers ignore these calls, as they should? I don’t think so. Remember, the dislocations are now astronomically larger than they were in 1979. The system is more leveraged and much more dependent on cheap credit. In the next proper liquidation, sovereign states and banks will default – no central banker will be able or willing to sit on his hands when that happens. But in order to postpone it (they won’t avoid it) they need to print ever more ever faster.
We are in a gold bull market for a reason, and a very good reason indeed. Unless the underlying fundamentals change (or policy changes fundamentally), I consider this sell-off in gold rather a buying opportunity.
Over at ConservativeHome, I have promoted Douglas Carswell’s ten minute rule Bill on legal tender laws and currency choice:
People today have unprecedented choice. They can shop around online. They can tune into numerous television and radio channels. They can even decide between different hospitals for medical treatment.
But why are people not allowed to decide for themselves in which currency to transact their business and store their own wealth?
Today, Douglas Carswell introduces a Bill designed to make a range of different currencies legal tender in the UK. It would mean that, with the click of a mouse, people would be able to store wealth and pay taxes in a range of different currencies of their choice.
The BBC are covering it here. Read the full article.
Banking theory remains one of the most heatedly debated areas of economics within Austrian circles, with two camps sitting opposite each other: full reservists and free bankers. The naming of the two groups may prove a bit misleading, since both sides support a free market in banking. The difference is that full reservists believe that either fractional reserve banking should be considered a form of fraud or that the perceived inherent instability of fiduciary expansion will force banks to maintain full reserves against their clients’ deposits. The term free banker usually refers to those who believe that a form of fractional reserve banking would be prevalent on the free market.
The case for free banking has been best laid out in George Selgin’s The Theory of Free Banking. It is a microeconomic theory of banking which suggests that fractional reserves will arise out of two different factors,
- Over time, “inside money” — banknotes (money substitutes) — will replace “outside money” — the original commodity money — as the predominant form of currency in circulation. As the demand for outside money falls and the demand for inside money rises, banks will be given the opportunity to shed unnecessary reserves of commodity money. In other words, the less bank clients demand outside money, the less outside money a bank actually has to hold.
- A rise in the demand to hold inside money will lead to a reduction in the volume of banknotes in circulation, in turn leading to a reduction of the volume of banknotes returning to issuing banks. This gives the issuing banks an opportunity to issue more fiduciary media. Inversely, when the demand for money falls, banks must reduce the quantity of banknotes issued (by, for example, having a loan repaid and not reissuing that money substitute).
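The first of these mechanisms can be sketched in a toy model (all figures hypothetical, not from Selgin): the reserve a bank must hold against its notes scales with the fraction of inside money its clients actually redeem for outside money, so falling redemption demand frees reserves to back additional fiduciary issue.

```python
# Toy model (hypothetical figures) of the free-banking reserve mechanism:
# required reserves scale with expected redemptions of outside money.
def required_reserves(inside_money: float, redemption_rate: float,
                      safety_margin: float = 2.0) -> float:
    """Outside money the bank must hold to meet expected redemptions,
    with a margin of safety against clustered redemptions."""
    return inside_money * redemption_rate * safety_margin

inside_money = 1_000_000.0  # banknotes and deposits outstanding
for rate in (0.50, 0.20, 0.05):  # falling demand to redeem outside money
    reserves = required_reserves(inside_money, rate)
    freed = inside_money - reserves
    print(f"redemption rate {rate:.0%}: reserves {reserves:,.0f}, "
          f"freed for fiduciary issue {freed:,.0f}")
```

At a 50% redemption rate the bank is effectively fully reserved; at 5% it needs only a tenth of its liabilities in commodity money. The model is only illustrative — a real bank's reserve demand depends on the statistical clearing behaviour Selgin analyses, not a fixed multiplier.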
Free bankers have been quick to tout a number of supposed macroeconomic advantages of Selgin’s model of fractional reserve banking. One is greater economic growth, since free bankers suppose that a rise in the demand for money should be considered the same thing as an increase in real savings. Thus, within this framework, fractional reserve banking capitalizes on a greater amount of savings than would a full reserve banking system.
Another supposed advantage is that of monetary equilibrium. An increase in the demand for money, without an equal increase in the supply of money, will cause a general fall in prices. This deflation will lead to a reduction in productivity, as producers suffer from a mismatch between input and output prices. As Leland Yeager writes, “the rot can snowball”, as an increase in uncertainty leads to a greater increase in the demand for money. This can all be avoided if the supply of money rises in accordance with the demand for money (this is why free bankers and quasi-monetarists generally agree with a central bank policy which commits to some form of income targeting).
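The monetary-equilibrium argument can be compressed into the equation of exchange (standard textbook notation, not taken from the text):

```latex
% M = money supply, V = velocity of circulation,
% P = price level, y = real output.
MV = Py
% A rise in the demand to hold money is a fall in V. With M fixed,
% the identity forces P (or y) downward:
P = \frac{MV}{y}
% The monetary-equilibrium prescription is to let M rise as V falls,
% keeping nominal income MV stable and sparing individual prices
% the downward adjustment.
```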
Monetary (dis)equilibrium theory is not new, nor does it originate with the free bankers. The concept finds its roots in the work of David Hume and was later developed in the United States during the first half of the 20th Century. The theory saw a more recent revival with the work of Leland Yeager, Axel Leijonhufvud, and Robert Clower. The integration of monetary disequilibrium theory with the microeconomic theory of free banking is an attempt at harmonizing the two bodies of theory. If a free banking system can meet the demand for money, then a central bank is unnecessary to maintain monetary stability.
The integration of the macro theory of monetary disequilibrium into the micro theory of free banking, however, should be considered more of a blemish than an accomplishment. It has unnecessarily drawn attention away from the merits of fractional reserve banking and instead muddled the free bankers’ case. Neither is it an accurate or useful macroeconomic theory of industrial misbalances or fluctuations.
The Nature of Price Fluctuations
The argument that deflation resulting from an increase in the demand for money can lead to a harmful reduction in industrial productivity is based on the concept of sticky prices. If all prices do not immediately adjust to changes in the demand for money then a mismatch between the prices of output and inputs goods may cause a dramatic reduction in profitability. This fall in profitability may, in turn, lead to the bankruptcy of relevant industries, potentially spiraling into a general industrial fluctuation. Since price stickiness is assumed to be an existing factor, monetary equilibrium is necessary to avoid forcing a readjustment of individual prices.
Since price inflexibility plays such a central role in monetary disequilibrium, it is worth exploring the nature of this inflexibility — why are prices sticky? The more popular explanation blames stickiness on an entrepreneurial unwillingness to adjust prices: those taking the hit would rather risk a lower income later than accept one now. Wage stickiness is also oftentimes blamed on the existence of long-term contracts, which prohibit downward wage adjustments.
Austrians can supply an alternative, or at least complementary, explanation for price stickiness. If equilibrium is explained as the flawless convergence of every single action during a specific moment in time, Austrians recognize that an economy shrouded in uncertainty is never in equilibrium. Prices are set by businessmen looking to maximize profits by best estimating consumer demand. As such, individual prices are likely to move around, as consumer demand and entrepreneurial expectations change. This type of “inflexibility” is not only present during downward adjustments, but also during upward adjustments. It is “stickiness” inherent in a money-based market process beset by uncertainty.
It is true that government interventionism oftentimes makes prices more inflexible than they would be otherwise. Examples of this are wage floors (minimum wage), labor laws, and other legislation which makes redrawing labor contracts much more difficult. These types of labor laws handicap the employer’s ability to adjust his employees’ wages in the face of falling profit margins. Wages are not the only prices which suffer from government-induced inflexibility. It is not uncommon for government to fix the prices of goods and services on the market; the most well-known case is possibly the price fixing scheme which caused the 1973–74 oil crisis. There is a bevy of policies which can be enacted by government as a means of congesting the pricing process.
But, let us assume away government and instead focus on the type of price rigidity which exists on the market. That is, the flexibility of prices and the proximity of the actual price to the theoretical market clearing price is dependent on the entrepreneur. As long as we are dealing with a world of uncertainty and imperfect information, the pricing process too will be imperfect.
Price rigidity is not an issue only during monetary disequilibrium, however. In our dynamic market, where consumer preferences are constantly changing and re-arranging themselves, prices will have to fluctuate in accordance with these changes. Consumers may reduce demand for one product and raise demand for another, and these industries will have to change their prices accordingly: some prices will fall and others will rise. The ability for entrepreneurs to survive these price fluctuations depends on their ability to estimate consumer preferences for their products. It is all part of the coordination process which characterizes the market.
The point is that if price rigidity is “almost inherent in the very concept of money”, then why are price fluctuations potentially harmful in one case but not in the other? That is, why do entrepreneurs who face a reduction in demand originating from a change in preferences not suffer from the same consequences as those who face a reduction in demand resulting from an increase in the demand for money?
Price Discoordination and Entrepreneurship
In an effort to illustrate the problems of an excess demand for money, some have likened the problem to an oversupply of fiduciary media. The problem of an oversupply of money in the loanable funds market is that it leads to a reduction in the rate of interest without a corresponding increase in real savings. This leads to changes in the prices between goods of different orders, which send profit signals to entrepreneurs. The structure of production becomes more capital intensive, but without the necessary increase in the quantity of capital goods. This is the quintessential Austrian example of discoordination.
In a sense, an excess demand for money is the opposite problem. There is too little money circulating in the economy, leading to a general glut. Austrian monetary disequilibrium theorists have tried to frame it within the same context of discoordination. An increase in the demand for money leads to a withdrawal of that amount of money from circulation, forcing a downward adjustment of prices.
But there is an important difference between the two. In the first case, the oversupply of fiduciary media is largely exogenous to the individual money holders. In other words, the increase in the supply of money is a result of central policy (either on the part of the central bank or of government). Theoretically, an oversupply of fiduciary media could also be caused by a bank in a completely free industry but it would still be artificial in the sense that it does not reflect any particular preference of the consumer. Instead, it represents a miscalculation on the part of the central banker, bureaucrat, or bank manager. In fact, this is the reason behind the intertemporal discoordination — the changing profit signals do not reflect an underlying change in the “real” economy.
This is not the issue when regarding an excess demand for money. Here, consumers are purposefully holding on to money, preferring to increase their cash balances instead of making immediate purchases. The decision to hold money represents a preference. Thus, the decision to reduce effective demand also represents a preference. The fall in prices which may result from an increase in the demand for money all represent changes in preferences. Entrepreneurs will have to foresee or respond to these changes just like they do to any other. That some businessmen may miscalculate changes in preference is one thing, but there can be no accusation of price-induced discoordination.
The comparison between an insufficient supply of money and an oversupply of fiduciary media would only be valid if the reduction in the money supply was the product of central policy, or a credit contraction on the part of the banking system which did not reflect a change in consumer preferences. But, in monetary disequilibrium theory this is not the case.
None of this, however, says anything about the consequences of deflation on industrial productivity. Will a rise in the demand for money lead to falling profit margins, in turn causing bankruptcies and a general period of economic decline?
Whether or not an industry survives a change in demands depends on the accuracy of entrepreneurial foresight. If an entrepreneur expects a fall in demand for the relevant product, then investment into the production of that product will fall. A fall in investment for this product will lead to a fall in demand for the capital goods necessary to produce it, and of all the capital goods which make up the production processes of this particular industry. This will cause a decline in the prices of the relevant capital goods, meaning that a fall in the price of the consumer good usually follows a fall in the price of the precedent capital goods. Thus, entrepreneurs who correctly predict changes in preference will be able to avoid the worst part of a fall in demand.
Even if a rise in the demand for money does not lead to the catastrophic consequences envisioned by some monetary disequilibrium theorists, can an injection of fiduciary media make possible the complete avoidance of these price adjustments? This is, after all, the idea behind monetary growth in response to an increase in demand for money. Theoretically, maintaining monetary equilibrium will lead to a stabilization of the price level.
This view, however, is the result of an overly aggregated analysis of prices. It ignores the microeconomic price movements which will occur with or without further monetary injections. Money is a medium of exchange, and as a result it targets specific goods. An increase in the demand for money will withdraw currency from this bidding process of the present, reducing the prices of the goods which it would have otherwise been bid against. Newly injected fiduciary media, maintaining monetary equilibrium, is being granted to completely different individuals (through the loanable funds market). This means that the businesses originally affected by an increase in the demand for money will still suffer from falling prices, while other businesses may see a rise in the price of their goods. It is only in a superfluous sense that there is “price stability”, because individual prices are still undergoing the changes they would have otherwise undergone.
So, even if the price movements caused by changes in the demand for money were disruptive — and we have established that they are not — the fact remains that monetary injections in response to these changes in demand are insufficient for the maintenance of price stability.
Implications for Free Banking
To a very limited degree, free banking theory does rely on some aspects of monetary disequilibrium. The ability to extend fiduciary media depends on the volume of returning liabilities; a rise in the demand for money will give banks the opportunity to increase the supply of banknotes. However, the complete integration of monetary disequilibrium theory does not represent theoretical advancement — if anything, it has confused the free bankers’ position and unnecessarily contributed to the ongoing theoretical debate between full reservists (many of which reject the supposed macroeconomic benefits of free banking) and free bankers.
We know that an increase in the demand for money will not lead to industrial fluctuations, nor does it produce any type of price discoordination. Like any other movement in demand, it reflects the preferences of the consumers which drive the economy. We also know that monetary injections cannot achieve price stability in any relevant sense. Thus, the relevancy of the macroeconomic theory of monetary disequilibrium is brought into question. Free banking theory would be better off without it.
It should be stressed, though, that a rejection of monetary disequilibrium is not the same as a rejection of fractional reserve banking. It could be the case that a free banking industry capitalizes on an increase in savings much more efficiently than a full reserve banking system. Or, it could be that the macroeconomic benefits of fractional reserve banking are completely different from those already theorized, or even that there are no macroeconomic benefits at all — it may purely be a microeconomic theory of the banking firm and industry. These aspects of free banking are still up for debate.
George A. Selgin, The Theory of Free Banking: Money Supply under Competitive Note Issue (Totowa, New Jersey: Rowman & Littlefield, 1988). Also see George A. Selgin, Bank Deregulation and Monetary Order (Oxon, United Kingdom: Routledge, 1996); Larry J. Sechrest, Free Banking: Theory, History, and a Laissez-Faire Model (Auburn, Alabama: Ludwig von Mises Institute, 2008); Lawrence H. White, Competition and Currency (New York City: New York University Press, 1989).
Leland B. Yeager, The Fluttering Veil: Essays on Monetary Disequilibrium (Indianapolis, Indiana: Liberty Fund, 1997), pp. 218–219.
Ibid., p. 218.
Clark Warburton, “Monetary Disequilibrium Theory in the First Half of the Twentieth Century,” History of Political Economy 13, 2 (1981); Clark Warburton, “The Monetary Disequilibrium Hypothesis,” American Journal of Economics and Sociology 10, 1 (1950).
Peter Howitt (ed.), et al., Money, Markets and Method: Essays in Honour of Robert W. Clower (United Kingdom: Edward Elgar Publishing, 1999).
Steven Horwitz, Microfoundations and Macroeconomics: An Austrian Perspective (United Kingdom: Routledge, 2000).
Some of the criticisms presented here have already been laid out in a forthcoming journal article: Phillip Bagus and David Howden, “Monetary Equilibrium and Price Stickiness: Causes, Consequences, and Remedies,” Review of Austrian Economics. I do not support all of Bagus’ and Howden’s criticisms, nor do I share their general disagreement with free banking theory.
Yeager 1997, pp. 222–223.
Laurence Ball and N. Gregory Mankiw, “A Sticky-Price Manifesto,” NBER Working Paper Series 4677, 1994, pp. 16–17.
Horwitz 2000, pp. 12–13.
Yeager 1997, p. 104.
Yeager 1997, p. 223. Yeager quotes G. Poulett Scrope’s Principles of Political Economy, “A general glut — that is, a general fall in the prices of the mass of commodities below their producing cost — is tantamount to a rise in the general exchangeable value of money; and is proof, not of an excessive supply of goods, but of a deficient supply of money, against which the goods have to be exchanged.”
Joseph T. Salerno, Money: Sound & Unsound (Auburn, Alabama: Ludwig von Mises Institute, 2010), pp. 193–196.
This is Menger’s theory of imputation; Carl Menger, Principles of Economics (Auburn, Alabama: Ludwig von Mises Institute, 2007), pp. 149–152.
The advent of a European Union with a single currency was hailed by Nobel laureate Robert A. Mundell as
a great step forward, because it will bring forth new and for once meaningful ideas about reform of the international financial architecture. The euro promises to be a catalyst for international monetary reform.
Unfortunately, by underwriting member countries’ financial risk with impunity, the EU made the euro the catalyst of unsound monetary policies that are now leading the Union into an economic maelstrom. Still less has euroization set the pace for an international reform of the monetary machinery.
Milton Friedman was more cautious about this super currency. The euro’s real Achilles heel, he said, would prove to be political: a system under different governments subject to very different political pressures could not endure a common currency. Without political integration, the tension and friction of the national institutional systems would condemn it to failure. Indeed the problem is always political, but not in the sense meant by Friedman. Regardless of the degree of political integration, governments’ spending and dissipation, which are unalterable tendencies of any political organism, make irredeemable paper monies act as transmission belts for financial and economic disruptions.
At the outset of the 2010 Greek crisis that triggered the European domino effect, Professor Mundell pointed out that there was nothing inherently bad in the euro as such; the problem lay in the country’s public debt spiralling out of control. However, being a strong advocate of financial and monetary “architectures” as means for pursuing monetary and non-monetary ends, he should have acknowledged that irredeemable currencies cannot be examined independently of the political framework in which they are embedded, because politics inevitably extends the role of money beyond its original function as a medium of exchange, making it trespass into the field of “economic policies”. Official theory does not see money as a neutral device, but as an instrument of policy for pursuing ends which conflict with its primary goal: to retain the value of the monetary unit. Failing this primary goal, monetary architectures no matter how they are framed make for economic devastation.
Is a monetary order a constitution?
Robert Mundell’s assertion — that a monetary order is to a monetary system what a constitution is to a political or electoral system — unintentionally sheds light on the authoritarian nature of today’s irredeemable money governance and its fraudulent practices. Where is the flaw in this catchy parallel? The flaw is that if a monetary order stands for a constitution, i.e. a democratic framework of laws and regulations, it does not provide for a separation of powers. The money order as a legislative system coincides with the executive power, the money system, embracing the range of practices and policies pursued for monetary and non-monetary ends. Moreover, the judicial system is concentrated in the same hands because monetary authorities are not accountable to anyone. With the exception of governments that, being their source of power, may be considered “ghost writers” of monetary constitutions, no one else can influence money legislation. In other words, in this framework there is no room for an “electorate” (producers and consumers). What basic law regulates the relationships between individuals and their monetary orders by guaranteeing fundamental rights?
The basic law of the gold standard was the free convertibility of currencies, allowing individuals to redeem paper into gold on demand. This acted as protection against money misuse by banks or governments. Convertibility was for producers and consumers the means to express their vote on the degree of money reliability. Money was an instrument of saving. Paper money, or fiduciary circulation, as opposed to the banknote which was a title of credit, could be issued too, but it had to comply with the law of gold circulation whereby the total volume of currency, coins, banknotes and paper notes had to correspond exactly to the quantity of metallic money necessary to allow economic exchanges. This being the purpose of the monetary system, banks, to avoid insolvency risks and gold reserves outflow, had to adjust, through the discount rate policy, the issue of fiduciary papers to the real needs of economy.
Therefore gold was a barrier against the use of money as an instrument of power. Its value was fixed on the world market where gold was always in demand. Gold then had the same value in each country, and that’s why it was a stable international means of payment. Finally exchange rates were stable, not because they were arbitrarily controlled by government but because they were always gravitating towards the gold parity, the rate at which currencies were exchanged, and this was determined by their respective gold content. Roughly stated, the gold standard was an order providing for a separation of powers: the fundamental law of metallic circulation was the constitution, the banking system was the executive power which was controlled by producers and consumers representing the judicial power to sanction abuses. In today’s monetary order, producers and consumers have become legally disarmed victims of monetary legislation, deprived of the weapon of defence against money manipulation.
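As a worked example of the parity mechanism just described (the gold definitions below are the commonly cited pre-1914 figures for the pound and the dollar, used here only for illustration):

```python
# Mint-parity sketch: under the classical gold standard the exchange
# rate gravitated to the ratio of the currencies' gold contents.
# Figures are the commonly cited pre-1914 definitions in grains of
# fine gold (illustrative, quoted from memory).
GOLD_IN_POUND = 113.0016   # grains of fine gold defining one pound sterling
GOLD_IN_DOLLAR = 23.22     # grains of fine gold defining one US dollar

parity = GOLD_IN_POUND / GOLD_IN_DOLLAR
print(round(parity, 4))  # -> 4.8666, the famous ~$4.87 per pound
```

Arbitrage kept the market rate within the “gold points” around this parity: any larger deviation made it profitable to ship gold itself rather than exchange paper.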
A trail of broken monetary arrangements
Central banks are the pillars of the monetary order because they are the source of the world’s supply and they regulate it. They control circulation, expand and restrict credit, stabilize prices and, above all they act as instruments of State finances. They are both “anvil and hammer”, trying to adapt the system to all occasions, remedying abuse by a still greater abuse, and considering themselves immune from the consequences. For these reasons, the leading architects of the world financial order should explain how the very same institutions that are pursuing unsound policies domestically might realistically think to establish a sound monetary order at a higher level.
The most important architecture, after the Second World War, was the Bretton Woods Agreement (1944). A “managed” form of gold standard was designed to give a stable world monetary order by a fixed exchange rate system with the dollar as the key currency, redeemable in gold only to central banks, and with member countries’ currencies pegged to the dollar at fixed rates and indirectly redeemable in gold. But the huge balance of payments deficit and high inflation in the US should have made gold rise against the dollar. A crisis of confidence in the reserve currency pushed foreign countries’ central banks to demand conversion into the metal, panicking the US, which promptly closed the gold window.
After the end of Bretton Woods in 1971, fiat currencies began to rule the world
The Smithsonian Agreement (1971-1973) which followed Bretton Woods was hailed by President Nixon as the “greatest monetary agreement in the history of the world” (!). It was an order based on fixed exchange rates fluctuating within a narrow margin, but without the backing of gold. Again, countries were expected to buy an irredeemable dollar at an overvalued rate. The system ended after two years.
Within the West European block, fixed exchange rates remained, but floating within a band against the dollar. “The snake”, as the arrangement was called, died in 1976. Major shocks, including the 1973 oil price spike and the 1974 commodity boom, caused a series of currency crises, repeatedly forcing countries out of the arrangement. But this did not persuade them to abandon the dream of intra-European currency stability. The snake was replaced in 1979 by the European Monetary System (EMS) with its basket of arbitrarily fixed-but-adjustable exchange rates floating within a small band and pegged to the European currency unit (ecu), the precursor of euro.
Yet the arrangement ultimately failed, again due to a series of severe shocks (a global recession, German economic and monetary unification, liberalization of capital controls). Rather than abandoning the system, European governments adopted much wider fluctuation bands, reaffirming their commitment to an order based on fixed rates and irredeemable currencies.
So we arrive at an arrangement without historical precedent: sovereign nations with a single legal tender issued by a common central bank — the euro came into operation in 1999. The stated aims of the single currency were noble: to facilitate trade and freedom of movement, providing for a market large enough to give each country better insulation against external shocks. Unfortunately, it couldn’t provide the member states with a system of defence against the internal shocks inherent to any irredeemable currency.
History confirms that currencies cannot rest on stability pacts and similar restrictions. Mainstream economists have long argued that the euro crisis has arisen from the lack of common social and fiscal policies, but the architects of the European currency were also well aware of this. Their aim was always to forge a European people, and a socialist European superstate, on the back of the euro. They had not the patience for a European state to naturally emerge in the same manner as the US, from a population sharing the same culture and language. “Europe of the people and for the people” was a misleading disguise to give an economic prison the honest appearance of a democratic and liberal project.
So in the end, Euroland has been working as a hybrid framework of institutions to which member countries delegate some of their national sovereignty in exchange for access to a larger market, capital, and low interest rates. But they entered a region of anti-competitive practices, antitrust regulations, redistributive policies, lavish subsidies, and faux egalitarianism — a perfect economic environment in which to run astronomical debts. The euro system represents one of the most significant attempts to place a currency at the service of political and social objectives, and it is for this reason that it will remain a source of problems.
Descent into the maelstrom
Today’s monetary orders are shaped to pursue what Rothbard has called the economics of violent intervention in the market. They are, in essence, based on a socialist idea of money. Accordingly they make for a framework of laws, conventions, and regulations fitting the interests of the rulers. In this framework it is the public debt that has utmost importance. Indeed, it is the monetary order that has developed the concept of debt in the modern sense. Because the management of public finances has become the supreme direction of economic policy, treasuries are now in charge of the national economies. Through the incestuous relationships they maintain with central banks, they are able to create the ideal monetary conditions in which to borrow ad libitum. Purely monetary considerations such as providing a stable currency are disregarded. Political, fiscal and social ends prevail over everything else.
In the last eighty years, monetary orders have allowed an abnormal increase in finance, banking, debt, and speculation completely unrelated to a development of sound economic activity and wealth production, but due instead to government’s overloading of central banking responsibilities. Unfortunately, the outcome is the de facto insolvency of the world monetary order. The ensuing adverse effects may be still delayed with the aid of various measures, interventions and expedients, but not much time is left. The stormy sea of debt has already produced a whirlpool that will be sucking major economies into a maelstrom, and the larger they are, the more rapid will be their descent. It remains to be seen what shape they will take after the shipwreck. Gloomy as the situation may appear, it is not hopeless because such an event might be the sole opportunity to cause a decisive change. It might be that instead of resuming the art of monetary expedients to create irredeemable money, governments and banks let them fade into oblivion. This would allow people to take their monetary destiny back into their own hands.
A view from America, previously published at Forbes.com on August 15th
Is it possible that the ghastly unemployment, stagnant growth (and possible double-dip recession), and financial market convulsions all can be traced back to one single decision? Perhaps.
Monetary policy is the most recondite yet most pervasive and powerful of economic forces. Keynes, in The Economic Consequences of the Peace, wrote, “There is no subtler, no surer means of overturning the existing basis of society than to debauch the currency. The process engages all the hidden forces of economic law on the side of destruction, and does it in a manner which not one man in a million is able to diagnose.”
The converse also is true. Restoring real monetary integrity engages all the hidden forces of economic law on the side of prosperity. And forces for monetary reform are very much in motion.
The dollar has fallen in value by more than 80% from the day when Richard Nixon took the world off the tattered remnants of the gold standard. Aug. 15 marks the 40th anniversary of the avowedly “temporary” abandonment of the gold standard by President Richard Nixon.
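The “more than 80%” figure can be roughly checked with consumer-price arithmetic; the index values below are approximate CPI-U readings assumed for illustration, not official data.

```python
# Rough check of the ">80% fall" claim using approximate US CPI-U
# index values (assumed figures, for illustration only).
cpi_1971 = 40.8    # approx. CPI-U, August 1971 (assumed)
cpi_2011 = 225.9   # approx. CPI-U, mid-2011 (assumed)

purchasing_power = cpi_1971 / cpi_2011  # value of a 1971 dollar in 2011
decline = 1 - purchasing_power
print(round(decline, 2))  # -> 0.82, i.e. a fall of more than 80%
```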
“Closing the gold window” was part of a series of dramatic, shocking, and destructive tactics by Washington, including wage-price controls, a tariff barrier, and other measures, all leading to economic and financial-market hell. All such measures save one stand discredited. The only piece of the Nixon Shock still in force was the piece most ostentatiously designated as temporary. Nixon: “I have directed Secretary Connally to suspend temporarily the convertibility of the dollar into gold….”
Suspending convertibility was no trivial matter. Nixon speechwriter William Safire recalled: “On the helicopter headed for Camp David, I was seated between [Herb] Stein and a Treasury official. When the Treasury man asked me what was up, I said it struck me as no big deal, that we would probably close the gold window. He leaned forward, put his face in his hands, and whispered, ‘My God!’ Watching this reaction, it occurred to me that this could be a bigger deal than I thought….”
It proved to be a very big deal. How ironic that the most staunch defenders of a pure paper standard, the sole remnant of Nixonomics, are a few influential “progressives” such as Paul Krugman, Joseph Stiglitz and Thomas Frank. Call them “the Nixonians.” The poor jobs growth and stagnation of today’s “world dollar standard” are, unsurprisingly, not dissimilar to the results of the Nixon Shock.
There is ample evidence that restoring gold convertibility would put the world back on the path of jobs, growth, and a balanced federal budget. Politicians do not like messing around with monetary policy. But gold, recently rediscovered by the Tea Party, has an impressive technical, economic, and political pedigree. Gold convertibility, properly applied, has a well-established track record of job creation across many eras.
The silver lining to the whipsawing Dow is that it makes politicians open to new ideas, even new old ideas. Monetary statesmen from Alexander Hamilton forward have faced circumstances far more dire than those of today and turned things around. Modern example? The German economic miracle, the Wirtschaftswunder.
That miracle was founded in currency reform. On the very day when Ludwig Erhard’s currency reform was put into place, the economic paralysis ended. The “rightest” economist of the 20th century, Jacques Rueff, wrote (with André Piettre) about the turnaround beginning on the very day of the reform:
Shop windows were full of goods; factory chimneys were smoking and the streets swarmed with lorries. Everywhere the noise of new buildings going up replaced the deathly silence of the ruins. If the state of recovery was a surprise, its swiftness was even more so. In all sectors of economic life it began as the clocks struck on the day of currency reform. Only an eye-witness can give an account of the sudden effect which currency reform had on the size of stocks and the wealth of goods on display. Shops filled with goods from one day to the next; the factories began to work. On the eve of currency reform the Germans were aimlessly wandering about their towns in search of a few additional items of food. A day later they thought of nothing but producing them. One day apathy was mirrored in their faces while on the next a whole nation looked hopefully into the future.
Rueff took a similar approach, including a dramatic currency reform, to reviving the French economy. As economist and Lehrman Institute senior advisor John Mueller summarizes:
Despite the unanimous opposition of his cabinet, de Gaulle adopted the entire Rueff plan, which required sweeping measures to balance the budget and make the franc convertible after 17.5% devaluation – though not without qualms. ‘All your recommendations are excellent,’ de Gaulle told Rueff. ‘But if I apply them all and nothing happens, have you considered how much real pain it will cause across this country?’ Rueff replied, ‘I give you my word, mon General, that the plan, if completely adopted, will re-establish equilibrium in our balance of payments within a few weeks. Of this I am absolutely sure; I accept that your opinion of me will depend entirely on the result.’ (It did: ten years later, de Gaulle awarded Rueff the medal of the Legion of Honor.)
Today, on this the 40th anniversary of the closing of the gold window, a group of Americans issued a statement reading, in its conclusion:
[W]e support a 21st century international gold standard. America should lead by unilateral resumption of the gold standard. The U.S. dollar should be defined by law as convertible into a weight unit of gold, and Americans should be free to use gold itself as money without restriction or taxation. The U.S. should make an official proposal at an international monetary conference that major nations should use gold rather than the dollar or other national currencies to settle payments imbalances between one another. A new international monetary system, based on gold, without official reserve currencies, should emerge from the deliberations of the conference.
Many of the signatories are associated with the American Principles Project, chaired by Sean Fieler, and the Lehrman Institute (with both of which this writer is professionally associated), chaired by Lewis E. Lehrman. Signatories also include such important thought leaders as Atlas Foundation’s Dr. Judy Shelton and Forbes Opinions editor John Tamny.
Politicians may have forgotten the power that real money, such as currency convertible into gold, has to reverse an economic crisis. But the people have not. Earlier this year, the government of Utah restored, to international attention, the recognition of gold and silver coins as legal money. Now news emerges that the largest and most respected political party in Switzerland is supporting the work of the Goldfranc Association, led by citizen Thomas Jacob, to introduce a gold-convertible Swiss franc as a parallel currency.
Proponents are using the Swiss political process to put the creation of a gold franc in the Swiss Constitution. Jacob finds himself in the very distinguished company of Rueff and Erhard.
While London burns, Switzerland thrusts gold-based currency reform toward the center of the international debate on how to rescue the euro, end the debt crisis, and turbocharge economic growth and job creation with integrity, not Nixonian manipulation.
Will a world Wirtschaftswunder — an economic miracle — follow a restoration of gold convertibility? History shows how practical such a miracle can be.
It is with no feelings of joy that we republish this article, first posted on 8 February 2010.
Guest contributor Anita Acavalos, daughter of Advisory Board member Andreas Acavalos, explains the political and economic predicament in Greece.
In recent years, Greece has found itself at the centre of international news and public debate, albeit for reasons that are hardly worth bragging about. Soaring budget deficits coupled with the unreliable statistics provided by the government mean there is no financial newspaper out there without at least one piece on Greece’s fiscal profligacy.
Although at first glance the situation Greece faces may seem as simply the result of gross incompetence on behalf of the government, a closer assessment of the country’s social structure and people’s deep-rooted political beliefs will show that this outcome could not have been avoided even if more skill was involved in the country’s economic and financial management.
The population has a deep-rooted suspicion of and disrespect for business and private initiative and there is a widespread belief that “big money” is earned by exploitation of the poor or underhand dealings and reflects no display of virtue or merit. Thus people feel that they are entitled to manipulate the system in a way that enables them to use the wealth of others as it is a widely held belief that there is nothing immoral about milking the rich. In fact, the money the rich seem to have access to is the cause of much discontent among people of all social backgrounds, from farmers to students. The reason for this is that the government for decades has run continuous campaigns promising people that it has not only the will but also the ABILITY to solve their problems and has established a system of patronages and hand-outs to this end.
Anything can be done in Greece provided someone has political connections, from securing a job to navigating the complexities of the Greek bureaucracy. The government routinely promises handouts to farmers after harsh winters and free education to all; every time there is a display of discontent they rush to appease the people by offering them more “solutions.” What they neglect to say is that these solutions cost money. Now that the money has run out, nobody can reason with an angry mob.
A closer examination of Greek universities can be used as a good illustration of why and HOW the government has driven itself to a crossroad where running the country into even deeper debt is the only politically feasible path to follow. University education is free. However, classroom attendance is appalling and there are students in their late twenties that still have not passed classes they attended in their first year. Moreover, these universities are almost entirely run by party-political youth groups which, like the country’s politicians, claim to have solutions to all problems affecting students. To make matters worse, these groups often include a minority of opportunists who are not interested in academia at all but are simply there to use universities as political platforms, usually ones promoting views against the wealthy and the capitalist system as a whole even though they have no intellectual background or understanding of the capitalist structure.
This problem is exacerbated by the fact that there is no genuine free market opposition. In Greece, right wing political parties also favour statist solutions but theirs are criticised as favouring big business. The mere idea that the government should be reduced in size and not try to have its hand in everything is completely inconceivable for Greek politicians of all parties. The government promises their people a better life in exchange for votes so when it fails to deliver, the people naturally think they have the right or even the obligation to start riots to ‘punish’ them for failing to do what they have promised.
Moreover, a glance at election results shows that certain regions are reliably “green,” supporting PASOK, and others “blue,” supporting Nea Dimokratia. These regions back the same party in every election because of the widespread system of patronage that has been built up. By supporting PASOK even in years when Nea Dimokratia wins, a region can collect on its loyalty when, inevitably, PASOK returns to power a few election cycles later, and vice versa. Not only are there well-established regional patronage networks; there are also powerful political families that use their clout to promise benefits to friends in exchange for support in election years.
Moreover, in line with conventional political theory on patronage networks, in regions that are liable to sway either way politicians have a built in incentive to promise the constituents more than everyone else. The result is almost like a race for the person able to promise more, and thus the system seems by its very nature to weed out politicians that tell people the honest and unpalatable truth or disapprove of handouts. This has led people to think that if they are in a miserable situation it is because the government is not trying hard enough to satisfy their needs or is favouring someone else instead of them. When the farmers protest it is not just because they want more money, it is because they are convinced (sometimes even rightly so) that the reason why they are being denied handouts is that they have been given to someone else instead. It is the combination, therefore, of endless government pandering and patronages that has led to the population’s irresponsible attitude towards money and public finance. They believe that the government having the power to legislate need not be prudent, and when the government says it needs to cut back, they point to the rich and expect the government to tax them more heavily or blame the capitalist system for their woes.
After a meeting in Brussels, current Prime Minister George Papandreou said:
Salaried workers will not pay for this situation: we will not proceed with wage freezes or cuts. We did not come to power to tear down the social state.
It is not out of the kindness of his heart that he initially did not want to impose a pay freeze. It was because doing so would mean that the country may never escape the ensuing state of chaos and anarchy that would inevitably occur. Eventually he did come to the realisation that in the absence of pay freezes he would have to plunge the country into even further debt and increase taxes and had to impose it anyway causing much discontent. Does it not seem silly that he is still trying to persuade the people that they will not pay for this situation when the enormous debts that will inevitably ensue will mean that taxes will have to increase in perpetuity until even our children’s children will be paying for this? This minor glitch does not matter, though, because nobody can reason with a mob that is fighting for handouts they believe are rightfully theirs.
Greece is the perfect example of a country where the government attempted to create a utopia in which it serves as the all-providing overlord offering people amazing job prospects, free health care and education, personal security and public order, and has failed miserably to provide on any of these. In the place of this promised utopian mansion lies a small shack built at an exorbitant cost to the taxpayer, leaking from every nook and cranny due to insufficient funds, which demands ever higher maintenance costs just to keep it from collapsing altogether. The architects of this shack, in a desperate attempt to repair what is left are borrowing all the money they can from their neighbours, even at exorbitant costs promising that this time they will be prudent. All that is left for the people living inside this leaking shack is to protest for all the promises that the government failed to fulfil; but, sadly for the government, promises will neither pay its debts nor appease the angry mob any longer. Greece has lost any credibility it had within the EU as it has achieved notoriety for the way government accountants seem to be cooking up numbers they present to EU officials.
Dismal as the situation may appear, there still is hope. The Greeks many times have shown that it is in the face of dire need that they tend to bond together as a society and rise to the occasion. Family ties and social cohesion are still strong and have cushioned people from the problems caused by government profligacy. For years, the appalling situation in schools has led families to make huge sacrifices in order to raise money for their children’s private tuition or send them to universities abroad whenever possible. This is why foreign universities, especially in the UK, are full of very prominent and hard working Greek students. Moreover, private (as opposed to public) levels of indebtedness, although on the rise, are still lower than many other European countries.
However, although societal bonding and private prudence will help people deal with the consequences of the current crisis, its resolution will only come about if Greek people learn to listen to the ugly truths that sometimes have to be said. They need to be able to listen to statesmen that are being honest with them instead of politicians trying to appease them in a desperate plea to get votes. The time for radical, painful, wrenching reform is NOW.
There are no magic wands, no bail-outs, no quick and easy fixes. The choice is between doing what it takes to put our house in order ourselves, or watching it collapse around us. This can only come about if Prime Minister George Papandreou uses the guts he has displayed in the past when his political stature and authority had been challenged and channels them towards making the changes the country so desperately needs. Only if he emerges as a truly inspired statesman who will choose the difficult as opposed to the populist solution will Greece be up again and on a path towards prosperity. He needs to display a willingness to clean up the mess made after years of bad government and get society to a point where they are willing to accept hard economic truths. One can only hope…
From Paper Money Collapse, 23 May 2011.
The widely-read Lex column in today’s Financial Times ran an article on gold ETFs (exchange-traded funds) that regurgitates a couple of assumptions on gold that are popular in the mainstream media and financial market circles. They are: 1) gold must be in a bubble and 2) this bubble must soon pop.
As Lex put it:
“Predicting the top of the gold bubble is foolhardy. It is safer to predict that the bubble’s popping will be especially nasty.”
In order for gold to be in a bubble I would suggest that two conditions have to be met: First, some erroneous but popular belief as to the merit of and ongoing demand for this asset has to capture the general public. (“On the national level, house prices in the United States never go down.” “All dot.com stocks will have market caps of billions of dollars.” “Those tulips will always be in demand.” “Governments can and will always pay debt in their own currency.”) Second, the bubble has to be inflated with easy money. I would even argue that this second condition is the more important one. If you pump enough new money into the economy and provide enough cheap credit, some irrational belief will soon emerge and bubbles will get inflated.
There is obviously plenty of easy money around – and it is certainly the reason for the gold rally, but not in the way that cheap credit created the housing bubble or stock market bubble. Gold is not rallying because it is so easy to buy gold on credit and because banks are falling over one another to put it on their balance sheets or use it as collateral for their fractional-reserve lending business. Indeed, a key message of the Lex article seems to be that gold ETFs are – contrary to some reports – indeed solidly gold-backed and thus pretty much as good as direct bullion investment. This means they are not over-inflated derivative structures around some small core gold holding, or in any other form the result of financial trickery. Whether this is indeed the case or not is a different topic. I do not want to comment on it here other than to point out that I personally still prefer direct investment in physical gold. Be that as it may, gold appears not to be rallying in response to financial leverage in the gold market, and that is an important difference from all other recent bubbles (such as real estate in the U.S. pre 2007, in Japan pre 1989, or in China today).
What I do find interesting, however, is that the Lex-writer uses the ETF story to argue that gold must still be in a bubble. The rationale seems to be as follows: ETFs have lowered the barriers to entry for gold investing. ETFs constitute a fairly low cost, liquid, and easily accessible way to bet on a rising gold price and thus have drawn a new set of investors to this precious metal. The new demand caused the price to rise, and the rising price has continued to attract ever more buyers. The rally is now feeding on itself. The latter point is not dissimilar to the one used by Warren Buffett to dismiss the gold rally when he
“…tells shareholders that he understands why rising prices can create excitement and draw in buyers, but it’s not the way to create lasting wealth.” (CNBC)
So according to this narrative, gold is not rising because of financial leverage but because of a fashion for ETFs. That fashion shows signs of petering out as – according to Lex – evidenced by the data from the World Gold Council that shows outflows from these instruments.
Yeah? So what?
According to the World Gold Council, in Q1 of 2011 outflows from ETFs and similar products totalled 56 tonnes.
At the same time, inflows into bullion and coins totalled 366.4 tonnes. That is a 52% year-on-year increase in physical demand and almost a doubling of demand if measured in rapidly declining paper dollars.
Continue reading at Paper Money Collapse
The main problem with having discussions about economics and financial markets is this: People look at these complex phenomena through entirely different prisms; they use vastly dissimilar – even contrasting – narratives as to what has happened, what is going on now, and what is therefore likely to happen in the future. Citing any so-called “facts” – statistical data, or the actions and statements of policymakers – in support of a specific interpretation and forecast is often a futile exercise: The same data point will be interpreted very differently if some other intellectual framework is being applied to it.
Blue pill or red pill?
There is what I call the mainstream view, the comforting view. That is the world in which the majority of commentators and almost all policymakers live. If you want to be part of this world, you have to take the blue pill.
In the words of Morpheus: “You wake up in your bed and you believe what you want to believe.”
Or, if you don’t want to take the blue pill, you can simply continue reading the main newspapers and watch CNBC – it’s the same thing. The perspective from inside the Matrix is this: We are facing cyclical challenges. The economy is an organism, and it is presently not performing to its full potential. It is still weakened by a terrible disease (financial crisis), but luckily it is now recovering. But because the disease was so severe, the recovery is slow. Thankfully, the doctors – the governments and central banks – have learned from Dr. Keynes and Dr. Friedman and are providing ample stimulus to aid this recovery. The medicine is applied in such strong doses that many observers are afraid the treatment itself could cause damage to the patient. There is, however, no alternative to such drastic medication, and we have to trust that, as the recovery proceeds, the medication will carefully be reduced.
This is the comforting narrative. Comforting, because it’s the cyclical view, which simply means, “we have been here before.” It also contains, at its core, a naïve view on money: injecting money into the economy has only two effects: it boosts growth (that is positive) and it lifts prices (that is sometimes positive, sometimes negative). No other effects of money-injections have to trouble us.
Alternatively…..you may take the red pill, and “I will show you how deep the rabbit hole goes.”
The economy is in reality not some organism or a machine that has a definitive performance potential. The acting parts are not some neat, statistically observable aggregates – but individuals, or groups of individuals who form households or companies. All these actors have their own personal aims and goals, and they all use the decentralized market economy to realize their plans as well as they can. For those stepping outside the Matrix, with its comforting idea that everybody wants higher GDP and that when GDP is higher, regardless of how this was achieved, everybody will be happy – this appears scarily chaotic: No unifying objectives but a multitude of separate and often conflicting wishes and plans. Yet, on closer inspection, it is not chaos, as the actors can use market prices to plan their actions rationally and coordinate them.
Market prices are essential for this extended and decentralized division of labor to work. But sadly, market prices are constantly being distorted.
Most importantly, the constant injection of new money in today’s system of fully flexible paper money tends to depress interest rates and fool the market participants into believing that more voluntary savings are available than really are, and that resource allocations and asset prices are therefore justified that correspond with a very low time preference (=high propensity to save) by the public. These distortions have been going on almost continuously for the past 4 decades but in particular over the past 20 years.
The result of such distorted market signals is the accumulation, over time, of a tremendous cluster of errors, visible in the form of unsustainable asset prices, excess levels of debt, and an under-collateralized pile of inflated paper assets.
For those outside the Matrix, the red-pill-crowd, there is only one solution: The printing of money and artificial lowering of interest rates has to stop. This allows the coordination of decentralized individual plans to make again use of correct market prices (importantly, that includes interest rates). The result will be the dissolution of the accumulated misallocations of resources and mis-pricings of assets – this is going to be painful for a while but necessary for markets to function properly again.
Those inside the Matrix can’t see it that way. For them, the recession is not the collective realization that a cluster of errors has piled up, and the drastic revision of a multitude of individual plans in response to this realization, but simply a drop in aggregate activity of the economic organism. This requires more money injections. More stimulus! More medication! Depressing interest rates further is an important part of the treatment.
The red-pill crowd knows that this will not work. It will slow the correction of past mistakes – which, ironically, the blue pill crowd will interpret as a sign of stability – and encourage new activities on the basis of wrong price signals, which must ultimately lead to an even bigger cluster of errors – but this activity will be interpreted by the blue-pill crowd as the green shoots of recovery.
With dislocations piling up, the creation of artificial growth becomes ever more difficult.
The red-pillers view money creation differently from the blue-pillers. The effects of money printing are not just higher growth and higher inflation but, much more importantly, the distortion of relative prices and, consequently, the misallocation of resources.
The present crisis is not a cyclical phenomenon – as the blue-pillers believe – it is a systemic problem. It is the process by which the paper money system approaches its endgame. The blue-pillers are in charge of the printing press and the government. They cannot but continue printing money.
Continue reading at Paper Money Collapse
The following testimony was delivered before the House of Representatives Subcommittee on Domestic Monetary Policy and Technology, chaired by Congressman Ron Paul (R-Texas), on “Monetary Policy and the Debt Ceiling: Examining the Relationship between the Federal Reserve and Government Debt,” in Washington, D.C. on May 11, 2011. It was previously published on Northwood University’s blog In Defense of Capitalism & Human Progress
“I place economy among the first and most important virtues, and public debt as the greatest of dangers to be feared . . . To preserve our independence, we must not let our rulers load us with public debt . . . we must make our choice between economy and liberty or confusion and servitude . . . If we run into such debts, we must be taxed in our meat and drink, in our necessities and comforts, in our labor and in our amusements . . . If we can prevent the government from wasting the labor of the people, under the pretense of caring for them, they will be happy.”
– Thomas Jefferson
Government Debt and Deficits
The current economic crisis through which the United States is passing has given a heightened awareness to the country’s national debt. After a declining trend in the 1990s, the national debt has dramatically increased from $5.7 trillion in January 2001 to $10.7 trillion at the end of 2008, to over $14.3 trillion through April of 2011. The debt has reached 98 percent of 2010 U.S. Gross Domestic Product.
The approximately $3.6 trillion that has been added to the national debt since the end of 2008 is more than double the market value of all private sector manufacturing in 2009 ($1.56 trillion), more than three times the market value of spending on professional, scientific, and technical services in 2009 ($1.07 trillion), and nearly five times the amount spent on non-durable goods in 2009 ($722 billion). Just the interest paid on the government’s debt over the first six months of the current fiscal year (October 2010-April 2011), nearly $245 billion, is equal to more than 40 percent of the total market value of all private sector construction spending in 2009 ($578 billion).
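The comparisons in the paragraph above are pure arithmetic and can be checked from the dollar figures quoted; a minimal Python sketch (every input below is an amount given in the testimony, nothing is added):

```python
# Check the debt comparisons quoted above.
# Amounts are as given in the testimony: trillions of dollars for the first
# group, billions of dollars for the interest and construction figures.

debt_added = 3.6            # added to the national debt since end of 2008 ($T)
manufacturing_2009 = 1.56   # market value of private sector manufacturing ($T)
services_2009 = 1.07        # professional/scientific/technical services ($T)
nondurables_2009 = 0.722    # spending on non-durable goods ($T)

interest_oct_apr = 245      # interest paid Oct 2010 - Apr 2011 ($B)
construction_2009 = 578     # private sector construction spending ($B)

print(round(debt_added / manufacturing_2009, 2))      # 2.31 -> "more than double"
print(round(debt_added / services_2009, 2))           # 3.36 -> "more than three times"
print(round(debt_added / nondurables_2009, 2))        # 4.99 -> "nearly five times"
print(round(interest_oct_apr / construction_2009, 2)) # 0.42 -> "more than 40 percent"
```

Each ratio matches the verbal characterization in the testimony.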
This highlights the social cost of deficit spending, and the resulting addition to the national debt. Every dollar borrowed by the United States government, and the real resources that dollar represents in the marketplace, is a dollar of real resources no longer available for private sector investment, capital formation, or consumer spending, and therefore no longer available to raise the quality and standard of living of the American people.
In this sense, the government’s deficit spending that cumulatively has been increasing the national debt has made the United States that much poorer than it otherwise could have and would have been, if the dollar value of these real resources had not been siphoned off and out of use in the productive private sectors of the American economy.
What has made this less visible and less obvious to the American citizenry is precisely because it has been financed through government borrowing rather than government taxation. Deficit spending easily creates the illusion that something can be had for nothing. The government borrows “today” and can provide “benefits” to various groups in the society in the present with the appearance of no immediate “cost” or “burden” upon the citizenry.
Yet, whether acquired by taxing or borrowing, the resulting total government expenditures represent the real resources and the private sector consumption or investment spending those resources could have financed that must be foregone. There are no “free lunches,” as it has often been pointed out, and that applies to both what government borrows as much as what it more directly taxes to cover its outlays.
What makes deficit spending an attractive “path of least resistance” in the political process is precisely the fact that it enables deferring the decision of telling voter constituents by how much taxes would otherwise have to be increased, and upon whom they would fall, in the “here and now” to generate the additional revenue to pay for the spending that is financed through borrowing.
But as the recent fiscal problems in a number of member nations of the European Union have highlighted, eventually there are limits to how far a government can try to hide or defer the real costs of all that it is providing or promising through its total expenditures to various voter constituent groups. Standard & Poor’s recent decision to revise the outlook on the U.S. government’s credit rating to “negative” shows clearly that what is happening in parts of Europe can happen here.
And given current projections by the Congressional Budget Office, the deficits are projected to continue indefinitely into future years and decades, with the cumulative national debt nearly doubling from its present level. In addition, whether covered by taxes or deficit financing, these debt estimates do not include the federal government’s unfunded liabilities for Social Security and Medicare through most of the 21st century. In 2009, the Social Security and Medicare trust funds were estimated to have legal commitments under existing law for expenditures equal to at least $43 trillion over the next seventy-five years. Others have projected this unfunded liability of the United States government to be much higher – possibly over $100 trillion.
The Federal Reserve and the Economic Crisis
The responsibility for a good part of the current economic crisis must be put at the doorstep of America’s central bank, the Federal Reserve. By some measures of the money supply, the monetary aggregates (MZM or M-2) grew by fifty percent or more between 2003 and 2007. This massive flooding of the financial markets with huge amounts of liquidity provided the funds that fed the mortgage, investment, and consumer debt bubbles in the first decade of this century. Interest rates were pushed far below any historical levels.
For a good part of those five years, according to the St. Louis Federal Reserve Bank, the federal funds rate (the rate of interest at which banks lend to each other), when adjusted for inflation – the “real rate” – was either negative or well below two percent. In other words, the Federal Reserve supplied so much money to the banking sector that banks were lending money to each other for free for a good part of this time. It is no wonder that related market interest rates were also pushed way down during this period.
Market interest rates are supposed to tell the truth. Like any other price on the market, interest rates are supposed to balance the decision of income earners to save a portion of their income with the desire of others to borrow that savings for various investment and other purposes. In addition, the rates of interest, through the present value factor, are meant to limit the time horizons of the investments undertaken so that the available savings can successfully bring them to completion and sustain them in the longer term.
Due to the Fed’s policy, interest rates were not allowed to do their “job” in the market place. Indeed, Fed policy made interest rates tell “lies.” The Federal Reserve’s “easy money” policy made it appear, in terms of the cost of borrowing, that there was more than enough real resources in the economy for spending and borrowing to meet everyone’s consumer, investment and government deficit needs far in excess of the economy’s actual productive capacity.
The housing bubble was indicative of this. To attract people to take out loans, banks not only lowered interest rates (and therefore the cost of borrowing), they also lowered their standards for credit worthiness. To get the money, somehow, out the door, financial institutions found “creative” ways to bundle together mortgage loans into tradable packages that they could then pass on to other investors. It seemed to minimize the risk from issuing all those sub-prime home loans, which we now see were really the housing market’s version of high-risk junk bonds. The fears were soothed by the fact that housing prices kept climbing as home buyers pushed them higher and higher with all of that newly created Federal Reserve money.
At the same time, government-created mortgage-guarantee agencies like Fannie Mae and Freddie Mac were guaranteeing a growing number of these wobbly mortgages, with the assurance that the “full faith and credit” of Uncle Sam stood behind them. By the time the Federal government formally had to take over complete control of Fannie and Freddie in 2008, they were holding the guarantees for half of the $10 trillion American housing market.
Low interest rates and reduced credit standards were also feeding a huge consumer-spending boom that resulted in a 25 percent increase in consumer debt between 2003 and 2008, from $2 trillion to over $2.5 trillion. With interest rates so low, there was little incentive to save for tomorrow and big incentives to borrow and consume today. But, according to the U.S. Census Bureau, during this five-year period average real income increased by at most 2 percent. Peoples’ debt burdens, therefore, rose dramatically.
The easy money and government-guaranteed house of cards all started to come tumbling down in the second half of 2008. The Federal Reserve’s response was to open wide the monetary spigots even more than before the bubbles burst.
The Federal Reserve has dramatically increased its balance sheet by expanding its holding of U.S. government securities and private-sector mortgage-backed securities to the tune of around $2.3 trillion. Traditional Open Market Operations plus its aggressive “quantitative easing” policy have increased bank reserves from $94.1 billion in 2007 to $1.3 trillion by April 2011, for a near fourteen-fold increase, and the monetary base in general has expanded from $850.5 billion in 2007 to $2,242.9 billion in April of 2011, a 260 percent increase. The monetary aggregates, MZM and M-2, respectively, have grown by 28 percent and 21.6 percent over this same period.
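The magnitudes above can be sanity-checked with simple arithmetic (the dollar figures are as quoted in the text; the ratios are my own calculation — note that $2,242.9 billion is roughly 2.6 times the 2007 base, which appears to be what the “260 percent” figure refers to):

```python
# Figures quoted in the text, in billions of dollars.
reserves_2007, reserves_2011 = 94.1, 1300.0   # bank reserves
base_2007, base_2011 = 850.5, 2242.9          # monetary base

reserve_multiple = reserves_2011 / reserves_2007  # the "near fourteen-fold increase"
base_multiple = base_2011 / base_2007             # base at ~2.6x its 2007 level

print(round(reserve_multiple, 1))  # 13.8
print(round(base_multiple, 2))     # 2.64
```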
In the name of supposedly preventing a possible price deflation in the aftermath of the economic boom, Fed policy has delayed and retarded the economy from effectively readjusting and re-coordinating the sectoral imbalances and distortions that had been generated during the bubble years. Once again interest rates have been kept artificially low. In real terms, the federal funds rate and the 1-year Treasury yield have been in the negative range since the last quarter of 2009, and at the current time are estimated to be below minus two percent.
This has prevented interest rates from informing market transactors what the real savings conditions are in the economy. So, once again, the availability of savings and the real cost of borrowing is difficult to discern so as to make reasonable and rational investment decisions, and not to foster a new wave of misdirected and unsustainable private sector investment and financial decisions.
The housing market has not been allowed to fully adjust, either. With so much of the mortgage-backed securities being held off the market in the portfolio of the Federal Reserve, there is little way to determine any real market-based pricing to determine their worth or their total availability so the housing market can finally bottom out with clearer information of supply and demand conditions for a sustainable recovery.
This misguided Fed policy has been, in my view, a primary factor behind the slow and sluggish recovery of the United States economy out of the current recession.
Federal Reserve Policy and Monetizing the Debt
Many times in history, governments have used their power over the monetary printing press to create the funds needed to cover their expenses in excess of taxes collected. Sometimes this has led to social and economic catastrophes.
Monetizing the debt refers to the creation of new money to finance all or a portion of the government’s borrowing. From early 2008 to the present, Federal Reserve holdings of U.S. Treasuries have increased by about 240 percent, from $591 billion in March 2008 to $1.4 trillion in early May 2011, or a nearly $1 trillion increase. In the face of an additional $3.6 trillion in accumulated debt during the last three fiscal years, it might seem that Fed policy has “monetized” less than one-third of government borrowing during this period.
However, the Fed’s purchase of mortgage-backed securities, no less than its purchase of U.S. Treasuries, potentially increases the amount of reserves in the banking system available for lending. And since 2008, the Federal Reserve has bought an amount of mortgage-backed securities that it prices on its balance sheet as being equal to about $928 billion.
The $1.4 trillion increase in the monetary base since the end of 2007, from $850.5 billion to $2.2 trillion, has increased the MZM measure of the money supply by $2,161.1 billion, or an additional $769 billion in the economy above the increase in the monetary base. This is an amount that is 83 percent of the dollar value of the $928 billion in mortgage-backed securities.
Due to the “money multiplier” effect – that under fractional reserves, total new bank loans are potentially a multiple of the additional reserves injected into the banking system – it is not necessary for the Fed to purchase, dollar-for-dollar, every additional dollar of government borrowing to generate a total increase in the money supply that may be equal to the government’s deficit.
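The multiplier logic in the paragraph above can be sketched numerically (a textbook simplification with a hypothetical 10 percent reserve ratio, not a model of actual Fed operations): each lending round re-deposits what was lent out, and the rounds sum toward the closed-form bound of reserves divided by the reserve ratio.

```python
def money_multiplier_rounds(new_reserves, reserve_ratio, rounds=100):
    """Sum successive lending rounds: R + R(1-r) + R(1-r)^2 + ...

    Each round, banks keep the fraction `reserve_ratio` as reserves
    and lend the rest, which returns to the system as new deposits.
    """
    total, deposit = 0.0, new_reserves
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)
    return total

closed_form = 100 / 0.10                       # R / r: the upper bound
iterated = money_multiplier_rounds(100, 0.10)  # converges toward that bound
print(round(closed_form, 1), round(iterated, 1))  # 1000.0 1000.0
```

This is why the Fed need not buy government debt dollar-for-dollar: an injection of reserves can, in principle, support a multiple of itself in new lending.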
Thus, it can be argued that Fed monetary policy has succeeded, in fact, in generating an increase in the amount of money in the banking system that is equal to two-thirds of the government’s $3.6 trillion of new accumulated debt.
That the money multiplier effect has not been as great as it might have been, so far, is because the Federal Reserve has been paying interest to member banks to not lend their excess reserves. This sluggishness in potential lending has also been affected by the general “regime uncertainty” that continues to pervade the economy. This uncertainty concerns the future direction of government monetary and fiscal policy. In an economic climate in which it is difficult to anticipate the future tax structure, the likely magnitude of future government borrowing, and the impact of new government programs, hesitancy exists on the part of both borrowers and lenders to take on new commitments.
But the monetary expansion has most certainly been the factor behind the worsening problem of rising prices in the U.S. economy and the significant fall in the value of the dollar on the foreign exchange markets.
The National Debt and Monetary Policy
It is hard for Americans to think of their own country experiencing the same type of fiscal crisis that has periodically occurred in “third world” countries. That type of government financial mismanagement is supposed to only happen in what used to be called “banana republics.”
But the fact is, the U.S. is following a course of fiscal irresponsibility that may lead to highly undesirable consequences. The bottom line truth is that over the decades the government – under both Republican and Democratic leadership – has promised the American people, through a wide range of redistributive and transfer programs and other on-going budgetary commitments, more than the U.S. economy can successfully deliver without seriously damaging the country’s capacity to produce and grow through the rest of this century.
To try to continue to borrow our way out of this dilemma would be just more of the same on the road to ruin. The real resources to pay for all the governmental largess that has been promised would have to come out of either significantly higher taxes or crowding out more and more private sector access to investment funds to cover continuing budget deficits. Whether from domestic or foreign lenders, the cost of borrowing will eventually and inescapably rise. There is only so much savings in the world to fund private investment and government borrowing, particularly in a world in which developing countries are intensely trying to catch up with the industrialized nations.
Interest rates on government borrowing will rise, both because of the scarcity of the savings to go around and lenders’ concerns about America’s ability to tax enough in the future to pay back what has been borrowed. Default risk premiums need not only apply to countries like Greece.
Reliance on the Federal Reserve to “print our way” out of the dilemma through more monetary expansion is not and cannot be an answer, either. Printing paper money or creating it on computer screens at the Federal Reserve does not produce real resources. It does not increase the supply of labor or capital – the machines, tools, and equipment – out of which desired goods and services can be manufactured and provided. That only comes from work, savings and investment. Not from more green pieces of paper with presidents’ faces on them.
However, what inflation can do is:
- Accelerate the devaluation of the dollar on the foreign exchange markets, and thereby disrupting trading patterns and investment flows between the U.S. and the rest of the world;
- Reduce the value, or purchasing power, of every dollar in people’s pockets throughout the economy as prices start to rise higher and higher;
- Undermine the effectiveness of the price system to assist people as consumers and producers in making rational market decisions, due to the uneven manner in which inflation impacts some prices first and affects others only later;
- Potentially slow down capital formation or even generate capital consumption, as inflation’s uneven effects on prices makes it difficult to calculate profit from loss;
- Distort interest rates in financial markets, creating an imbalance between savings and investment that sets in motion the boom and bust of the business cycle;
- Create incentives for people to waste their time and resources trying to find ways to hedge against inflation, rather than devote their efforts in more productive ways that improve standards of living over time;
- Bring about social tensions as people look for scapegoats to blame for the disruptive and damaging effects of inflation, rather than see its source in Federal Reserve monetary policy;
- Run the risk of political pressures to introduce distorting price and wage controls or foreign exchange regulations to fight the symptom of rising prices, rather than the source of the problem – monetary expansion.
What is To Be Done?
The bottom line is, government is too big. It spends too much, taxes too heavily, and borrows too much. For a long time, the country has been trending more and more in the direction of increasing political paternalism. Some people argue, when it is proposed to reduce the size and scope of government in our society, that this is breaking some supposed “social contract” between government and “the people.”
The only workable “social contract” for a free society is the one outlined by the American Founding Fathers in the Declaration of Independence and formalized in the Constitution of the United States. This is a social contract that recognizes that all men are created equal, with governmental privileges and favors for none, and which expects government to respect and secure each individual’s right to his life, liberty, and honestly acquired property.
The reform agenda for deficit and debt reduction, therefore, must start from that premise and have as its target a radical “downsizing” of government. That policy should plan to reduce government spending across the board in every line item of the federal budget by 10 to 15 percent each year until government has been reduced in size and scope to a level and a degree that resembles, once again, the Founding Fathers’ conception of a free and limited government.
A first step in this fiscal reform is to not increase the national debt limit. The government should begin, now, living within its means – that is, the taxes currently collected by the Treasury. In spite of some of the rhetoric in the media, the U.S. need not run the risk of defaulting or losing its international financial credit rating. Any and all interest payments or maturing debt can be paid for out of tax receipts. What will have to be reduced are other expenditures of the government.
But the required reductions and cuts in various existing programs should be considered as the necessary “wake-up call” for everyone in America that we have been living far beyond our means. And as we begin living within those means, priorities will have to be made and trade-offs will have to be accepted as part of the transition to a smaller and more constitutionally limited government.
In addition, the power of monetary discretion must be taken out of the hands of the Federal Reserve. The fact is, central banking is a form of monetary central planning under which it is left in the hands of the members of the Board of Governors of the Federal Reserve to “plan” the quantity of money in the economy, influence the value or purchasing power of the monetary unit, and manipulate interest rates in the loan markets.
The monetary central planners who run the Federal Reserve have no more or greater knowledge, wisdom or ability that those central planners in the old Soviet Union. The periodic recurrence of the boom and bust of the business cycle demonstrates that there is no way for them to get it right – in spite of them saying, again and again, that “next time” they will get it right.
It is what the Nobel Prize-winning, Austrian economist, Friedrich A. Hayek, once called a highly misplaced “pretense of knowledge.” That is why in a wide agenda for reform, the goal should be to move towards a market-based monetary system, the first step in such an institutional change being a commodity-backed monetary order such as a gold standard.
And in the longer-run serious consideration must be given the possibilities of a monetary system completely privatized and competitive, without government control, management, or supervision.
The budgetary and fiscal crisis right now has made many political issues far clearer in people’s minds. The debt dilemma is a challenge and an opportunity to set America on a freer and potentially more prosperous track, if the reality of the situation is looked at foursquare in the eye.
Otherwise, dangerous, destabilizing, and damaging monetary and fiscal times may be ahead.
The 2011 Statistical Abstract: The National Data Book (Washington, D.C., U.S. Census Bureau, 2011), Table 669.
Richard M. Ebeling, “Why Government Grows: The Modern Democratic Dilemma,” AIER Research Reports, Vol. LXXV, No. 14 (Great Barrington, MA: American Institute for Economic Research, August 4-18, 2008); James M. Buchanan and Richard E. Wagner, Democracy in Deficit: The Political Legacy of Lord Keynes (New York: Academic Press, 1977); and earlier, Henry Fawcett and Millicent Garrett Fawcett, Essays and Lectures on Social and Political Subjects (Honolulu, Hawaii: University Press of the Pacific, 2004), Ch. 6: “National Debts and National Prosperity,” pp. 125-153.
The Budget and Economic Outlook: Fiscal Years 2011 to 2021 (Washington, D.C.: Congressional Budget Office, January 27, 2011)
Richard M. Ebeling, “Brother, Can You Spare $43 Trillion? America’s Unfunded Liabilities,” AIER Research Reports, Vol. LXXVI, No. 3 (Great Barrington, MA: American Institute for Economic Research, March 2, 2009), pp. 1-3.
Michael D. Tanner, “The Coming Entitlement Tsunami.” April 6, 2010. http://www.cato.org/pub_display.php?pub_id=11666 (accessed May 5, 2011).
For more details, see, Richard M. Ebeling, “The Financial Bubble was Created by Central Bank Policy,” American Institute for Economic Research, November 5, 2008, http://www.aier.org/research/briefs/667-the-financial-bubble-was-created-by-central-bank-policy (accessed on May 5, 2011).
See, Richard M. Ebeling, “Market Interest Rates Need to Tell the Truth, or Why Federal Reserve Policy Tells Lies,” in Richard M. Ebeling, Timothy G. Nash, and Keith A. Pretty, eds., In Defense of Capitalism (Midland, MI: Northwood University Press, 2010) pp. 57-60; http://defenseofcapitalism.blogspot.com/2009/12/market-interest-rates-need-to-tell.html
Thomas Sowell, The Housing Boom and Bust (New York: Basic Books, 2010); Johan Norberg, Financial Fiasco (Washington, D.C.: Cato Institute, 2009).
Richard M. Ebeling, “Is Consumer Credit the Next Bomb in the Economic Crisis?” American Institute for Economic Research, October 22, 2008, http://www.aier.org/research/briefs/599-consumer-credit-the-next-qbombq-in-the-economic-crisis (accessed May 5, 2011).
Monetary Trends (St. Louis, MO: St. Louis Federal Reserve, May 2011)
See, Richard M. Ebeling, “The Hubris of Central Bankers and the Ghosts of Deflation Past” July 5, 2010, http://defenseofcapitalism.blogspot.com/2010/07/hubris-of-central-bankers-and-ghosts-of.html (accessed May 5, 2011)
See, Richard M. Ebeling, “The Lasting Legacies of World War I: Big Government, Paper Money, and Inflation,” Economic Education Bulletin, Vol. XLVIII, No. 11 (Great Barrington, MA: American Institute for Economic Research, November 2008), for a detailed example of the German and Austrian instances of monetary-financed inflationary destruction following the First World War.
See, Richard M. Ebeling, “The Cost of the Federal Government in a Freer America,” The Freeman: Ideas on Liberty (March 2007), pp. 2-3; http://www.thefreemanonline.org/from-the-president/the-cost-of-the-federal-government-in-a-freer-america/ (accessed May 5, 2011).
See, Richard M. Ebeling, “The Gold Standard and Monetary Freedom,” March 30, 2011, http://defenseofcapitalism.blogspot.com/2011/03/gold-standard-and-monetary-freedom-by.html
See, Richard M. Ebeling, “Real Banking Reform? End the Federal Reserve,” January 22, 2010, http://defenseofcapitalism.blogspot.com/2010/01/real-banking-reform-end-federal-reserve.html | 1 | 2 |
His fascination didn't lead to a career in computers, but it did impact his direction. Eastwood, who now has a wall of awards and recognitions for his technology leadership in K12 schools, created environments so that children could learn the language of technological advances, or what is considered the new, critical literacies of the 21st century.
The new literacies are about online reading comprehension and the learning skills required by the Internet and other information and communication technologies (ICTs), including content found on wikis, blogs, video and audio sites and in e-mail.
Eastwood knows the value of literacy. For the third year, Eastwood has mandated a literacy course for the Middletown district's sixth- through eighth-graders, on top of the four core subjects of English, math, science and social studies. And just this past fall, ninth-graders started taking literacy, a 45-minute course. Not only do students learn creative writing, listening, speaking and communication skills in class, but they learn how to read Web material, distinguishing authentic sites from bogus ones, and how to efficiently search the Internet. The district, which has a comprehensive literacy instructional model for every grade, provides teachers with a methodology of how to teach literacy and provides scope and sequences as to which materials to use that will best meet the needs of all students.
"My focus in life is around how do you create the environments that produce the greatest change and improvements for kids?" says Eastwood. "The reason I was able to bring about technological change and improvement has everything to do with working with people that were highly motivated and wonderful change agents. My job is to help develop a vision and facilitate progress toward those goals."
Eastwood started off in education as an adjunct instructor at the State University College at Potsdam, N.Y., teaching undergraduates a course in classroom and behavioral management. After working as a resource room teacher he became assistant to the personnel director at the Liverpool (N.Y.) Central School District. In the 1980s, he served as administrative assistant to the New York Senate majority leader, Warren Anderson, acting as a district liaison with special interest groups and constituents, and helped develop an effective schools proposal for the state Senate, which had positive impact on low performing districts. And he was executive assistant to U.S. Rep. George Wortley.
In the 1990s, he worked as an adjunct instructor teaching classroom management at State University College of New York at Oswego. He then went to Oswego City (N.Y.) School District, which has a 45 percent poverty rate, and worked as a director of secondary education and technology, then as assistant superintendent for curriculum, instruction and technology before becoming superintendent in 2001. He was credited for taking a technology-depressed district to one of national excellence. He also led teachers to implement the new literacies, such as analyzing text, in K12 classes. Student dropout rates decreased and SAT scores increased during his tenure at Oswego.
Eastwood left in 2004 in search of a new challenge. He took the job in the Middletown district, which has eight schools, lies 65 miles northwest of New York City, and is a high poverty (68 percent) and high minority district that is 33 percent black, 33 percent Hispanic and 33 percent white.
Soon after Eastwood started in Middletown, he conducted audits of all instructional programs, including technology, and found curricular and programmatic gaps. "We had to make changes to build the systems that would get us to a point where instructionally students were benefiting from their time in school," he says.
He shut down the science labs because research reveals that successful technology integration in general, including science, is best done in the classroom where teachers can integrate the instruction, versus taking students to a lab where another teacher instructs. He also upgraded the infrastructure with new wiring, more equipment, better technology and wireless capabilities. "There was some angst about it (among staff and administrators) as we were going backward, but we were planning for the future," he says.
Most buildings are wireless to advance computer use and allow security personnel to use personal digital assistants to input or check information immediately while on patrol. And the entire structure is built around Internet protocol systems in classrooms, with all phones, announcements, clocks and emergency notification systems connected and coordinated. The district also has 300 video surveillance cameras, which send digital pictures to the server. "Once it goes through the server, it's easily visible through access to our Web site for emergency purposes," Eastwood says.
When the technology overhaul is completed, which should be during the 2008-2009 school year, Eastwood says the district should have one computer for every three or four students in K12 classrooms. And in high school, about a quarter of the classrooms will have one-to-one ratios. "Although we want them to work in groups, they need to do individual work in the classroom," he says.
On top of creating a better infrastructure, Eastwood added three years ago two new teaching positions to help coach and support teachers in infusing technology into the curriculum. The teachers in these positions are called technology integration specialists. A third specialist will come on board in September.
"It's about teachers teaching teachers," he says. "Teachers can relate to teachers. They know the issues, they talk the lingo and they sympathize with the change that is necessary to be successful."
Technology specialists help teachers shift from merely giving lectures to students to integrating a variety of technologies in the classroom to meet 21st century skills, according to Amy Creeden, technology integration coach.
All areas of learning for today's students must have some form of interactability to be successful. If there is no interactability in lessons, students will undoubtedly tune out, Eastwood adds. "Teachers have to be so much on top of their games to create interactive environments," he says. "We're trying to look at the new literacies and see how we can use their devices" and ensure that today's forms of communication do not undermine state and federal requirements to create literate students, he adds.
With a $1.5 million grant from the federal Enhancing Education Through Technology program, 100 teachers, mostly in grades 6 through 12, were put into the district's SMART Board interactive whiteboards program, which includes using the Notebook software program in lessons. Teachers work on Dell laptops, use LCD projectors and integrate various software programs, such as BrainPOP, Inspiration Software's Kidspiration, and Tom Snyder Productions' Reading for Meaning, and Web sites, such as Discovery Education Streaming videos, into lessons.
With such changes, students are out of their seats, "taking charge of their own learning," and teachers are "thinking out of the box," Creeden adds.
Creeden credits Eastwood's leadership in helping the program grow by "leaps and bounds." "He really has such a commitment to new and emerging technologies, and he encourages us to move forward," she says.
Literacy Comes to Middletown
Even before Eastwood arrived in Middletown, he knew the importance of the new literacies in Oswego, where he was schools chief for three years at the turn of this century. Donald Leu, a new literacy guru and co-director of the New Literacies Research Lab at the University of Connecticut, visited the upstate New York district to give advice about teaching youngsters new literacies. "He was focused on using resources to develop the best instructional interventions and building them so that the classroom environment and the instruction were high quality," Eastwood recalls.
Leu impressed upon Eastwood that all children learn equally well. "If you take that approach, every child must have some prescriptive plan, and you have to make sure their individual learning needs are taken into effect when instruction is provided," Eastwood says. That's when the power of technology is clear, because one teacher cannot do it alone. He or she would struggle to provide "all the variances around instruction to meet the needs of all kids," Eastwood adds.
The Middletown district's mandated literacy course differs from the typical English course in that literacy focuses more on writing and communicating, as well as reading material online. And literacy is embedded in math problems, forcing students to explain their calculations or solutions. English, on the other hand, focuses more on the rules of instruction. Eastwood says his push for new, critical literacies is in part answering the need to change the way students think and learn in the 21st century, and it's also a reaction to the way in which students already learn and communicate through social networking sites, blogs and chat rooms.
Over the past few years, teachers had started seeing text-messaging lingo in written essays and projects from students, Eastwood says. "That's the scariest part of it," he says. "It's second nature [to students]. It's a behavior that is fully ingrained."
Aside from that, Eastwood wants to help students learn how to collaborate on projects-a 21st century must-have skill. "Given students' current way of networking, they have become isolated as well," he says. "They have a difficult time interacting with each other and working in teams."
For example, he often watches youngsters "communicating" on trains or in the cafeteria via their cell phones using text messages. They can literally just turn their bodies and speak to each other, but they choose to text instead, he says.
Linda Hatfield, the district's literacy coordinator, says the literacy program teaches students how to identify biases in online text, how to research information and investigate who it's being written for and for what purpose. "So they are looking at it and collectively making a decision, synthesizing it in regard to the topic they are researching," she says.
And students learn how to research so they don't get overwhelmed by the reams of information or hits from a Google search. "We teach them how to refine their search skills," Hatfield says.
Eastwood and Hatfield also encourage teachers of literacy to drill into students the idea that you cannot believe everything you read online in part because there are various sources contributing to various Web pages. "It's a process," Hatfield says.
"The vast majority of what they read online now tends to have false information in it," Eastwood adds. As a child, Eastwood recalls that encyclopedias and textbooks he used in school were nearly 100 percent accurate. But now, the amount of information online-for example, 20,000 articles for one subject-make verifying facts much more difficult, he says. "That's the problem," Eastwood says. "They have to be better adjudicators of fact to make sure what they're reading is true and valid."
Testing and Assessment
The literacy course is also held to high standards, following the same New York state testing pattern as English language arts, Eastwood says. "We believe that after four years, the literacy skills should have been or would be developed enough to perform successfully on state examinations," Eastwood says.
Teachers use their own assessments quarterly to see where students fall and if and where they need help, and once a year, students take final exams and state exams.
Sue Short, Mechanicstown Elementary School principal, says students were struggling with state assessment exams in English language arts in the late 1990s. But Eastwood realized "quickly we were missing some key components that would help us become successful," Short recalls.
There was no scope or sequence in place, and teachers didn't know what they had to teach, she says. And they had no way of assessing student literacy skills. Students in grades K2 now undergo a primary literacy assessment that is aligned with DIBELS, or Dynamic Indicators of Basic Early Literacy Skills, a set of tests that focus on skills for learning to read.
"I think if we can get our kids to be literate-to read, write, speak and listen well-they can succeed in anything else they do," Eastwood says.
Like any great change, people will undoubtedly resist. It was no different at Middletown's schools when Eastwood overhauled the technology and literacy programs, among others. "There was a stage of chaos and complaining," Eastwood recalls. "Once we worked through it, things started to dramatically improve. The success of change always has to deal with the stick-to-it-iveness."
Before Eastwood became superintendent in 2004, Middletown High School, a high poverty and high minority school, was identified under the No Child Left Behind law as needing improvement in 16 areas. But last year, the school met every state accountability standard. The school will stay off the list if it meets all accountability standards again this year, and both middle schools have dramatically improved their accountability status and could be off the list of needing improvement in another year, Eastwood adds.
"I think early on they thought, 'We'll never accomplish this,' " he recalls. "And now they believe they can do it."
Attendance is also up and above average, so the district is no longer in the bottom 10 percent statewide, he adds. The dropout rate has decreased to just shy of 7 percent from 17 to 25 percent prior to Eastwood's arrival. And there is a 36 percent increase in the number of students graduating in just four years compared to five years. Half the graduates are also heading to four-year colleges, when four years ago, only 48 out of 265 graduates did so, Eastwood adds.
Teachers have undergone summer and after-school in-service programs to grasp the literacy program. And any overtime beyond the school day is rewarded with stipends or an equivalent salary, which often comes from grant monies, Eastwood says. "Teachers have been wonderful in understanding the need for extra time," he says.
Selena Fischer, director of special education, attributes the district's successes to Eastwood's leadership and his push for literacy. This push catches elementary students who might normally fall behind and possibly be put in special education, Fischer adds. "He's a tough person to work for, and he's a breath of fresh air," she says, referring to his high expectations. "He has such an excellent knowledge base of curriculum, and he's good with transportation and buildings and grounds. He's a CEO of a large corporation and he's not afraid to allow his administration to expand their knowledge."
Time Is Now
Eastwood knows he has done the right thing with literacy, but wishes he had more time. "If we waited a long time to make change, you would have had many cohorts of students" not armed with literacy skills, he says. "That's a crisis for me. We can't wait. We have to take care of the kids in the system now."
With more time, he says, he could have better communicated how the literacy program could be taught, and have staff and teachers spend more time in understanding best practices. For now, he hopes to bring in an educational consultant to videotape master teachers in action, teaching the elements of best practices and providing helpful vignettes that work around new literacies and any type of instruction.
Hatfield describes Eastwood as "brilliant." "He is a leader who is in the forefront in technology and literacy," she says. "A lot of superintendents don't have a lot of knowledge in that area. And he truly values it and has a wealth of knowledge. I think he really wants what is best for kids and he wants to do what's current."
Angela Pascopella is senior features editor. | 1 | 6 |
This page gives an explanation of the several confusing representations of non-printable characters such as
\n \r \t \f and
^J ^M ^I ^L and
C-m RET <return> Enter ^m control-m 13 (?\C-m), and the various key syntax for
them in keyboarding related software (X11 modmap, emacs, OS X DefaultKeyBinding.dict, AutoHotkey, …), and how to type non-printable characters in emacs.
On , Will 〔schimpan…@gmx.de〕 wrote:
how can I find the an overview on how to enter meta-characters (⁖ esc, return, linefeed, tab, …) (a) in a regular buffer (b) in the minibuffer when using standard search/replace-functions (c) in the minibuffer when using search/replace-functions using regular expressions (d) in the .emacs file when defining keybindings As far as I can see in all those situations entering meta-characters is addressed in a different way which I find confusing, ⁖ (a) <key> _or_ C-q <key> (b) C-q C-[, C-q C-m, C-q C-j, C-q C-i (c) \e, \r, \n, \t (d) (define-key [(meta c) (control c) (tab c)] “This is confusing!”) Furthermore, they are displayed in a different way,⁖ - actual, visible layout - ^E, ^M, ^L, ^I - Octals I would be happy about pages summarizing such information. Any references available?
The issues involve non-printable chars, its representation, its input method, its input method representation, suppression of a key's normal function, and program language's need to represent non-printables in strings.
Here's a short summary:
The quoted-insert command lets you type a char and suppress its normal function. For example, if you need to insert a literal Tab or newline char in the minibuffer.
(global-set-key [(control b)] 'cmd) and other variations are emacs's syntax to represent keystrokes in elisp. A syntax for key strokes is necessary because keys are not ASCII chars (for example, function keys, Home key). For historical reasons, elisp has several syntaxes to represent the same keystrokes.
The following is a detailed explanation.
Your first item:
The 【Ctrl+q】 (holding the Control key down then type q) is the keyboard shortcut to invoke the command
quoted-insert. After this command is invoked, the key press on your keyboard will force emacs to insert a character represented by that key, and suppress that key's normal function.
For example, if you are doing string replacement, and you want to replace tabs by returns. When emacs prompts you to type a string to replace, you can't just press the Tab ↹ key, because the normal function of a tab key in emacs will try to do a command completion. (and in other Applications, it usually switches you to the next input field) So, here you can do 【Ctrl+q】 first, then press the Tab ↹ key. Similarly, you can't type the Return ↩ key and expect it to insert a newline character, because normally the Return ↩ key will activate the OK button or signal “end of input”.
This input mechanism usually doesn't exist in other text editors. In other text editors, when you want to enter the ASCII Tab character or Carriage Return character in some pop-up dialog, you often use a special representation such as
\r instead. Or, sometimes, by holding down the mouse, then pressing the key. Or, they simply provide a graphical menu or check box to let you select the special characters. The need to input a character literally comes up frequently in keyboard macro apps. (See: Mac OS X Keyboard Software ◇
Windows Keyboard Software.)
Ctrl+q Ctrl+[, Ctrl+q Ctrl+m, Ctrl+q Ctrl+j, Ctrl+q Ctrl+i
Here, the 【Ctrl+[】, 【Ctrl+m】, 【Ctrl+j】 etc key-press combinations, are methods to input non-printable characters that may not have a corresponding key on the keyboard.
For example, suppose you want to do string replacement, by replacing Carriage Return (ASCII 13) by Line Feed (ASCII 10). Depending what is your operating system and software, usually your keyboard only has a key that corresponds to just one of these characters. But now with the special method to input non-printable characters, you can insert any of the non-printable characters.
When speaking of non-printable characters, implied in the context is some standard character set. Implicitly, we are talking about ASCII, and this applies to emacs. Now, in ASCII, there are about 30 non-printable characters. Each of these is given a standard abbreviation, and several representations for different purposes. For example, ASCII 13 is the “Carriage return” character, with standard abbreviation code CR, and “^M” as its control-key-input representation. (M being the 13th of the English alphabet), and Control-m is the conventional means to input the character, and the conventional method to indicate a control key combination is by using the caret “^” followed by the character.
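The caret convention is plain arithmetic on character codes: a control character's value is its letter's position in the alphabet, which is the letter's ASCII code with everything but the low 5 bits masked off. Here is a quick illustration in ordinary Python (not emacs-specific; the helper name is made up for this sketch):

```python
# "^M" (Control-M) denotes the control character whose code is the
# letter's alphabet position: mask the letter's ASCII code to 5 bits.
def control_char(letter):
    return chr(ord(letter.upper()) & 0x1F)

print(ord(control_char("M")))      # 13, the Carriage Return (CR)
print(control_char("M") == "\r")   # True
print(control_char("J") == "\n")   # True: ^J is Line Feed
print(control_char("I") == "\t")   # True: ^I is Tab
```

This masking is essentially what old terminals did when the Control key was held down, which is why the caret notation and the control-key input method coincide.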
For the full detail, look at the table in the wikipedia article: ASCII. Here's an excerpt of the non-printable ASCII chars table.
|Dec|Hex|Abbr|Symbol|Caret|Description|
|1|01|SOH|␁|^A|Start of Header|
|2|02|STX|␂|^B|Start of Text|
|3|03|ETX|␃|^C|End of Text|
|4|04|EOT|␄|^D|End of Transmission|
|16|10|DLE|␐|^P|Data Link Escape|
|17|11|DC1|␑|^Q|Device Control 1 (oft. XON)|
|18|12|DC2|␒|^R|Device Control 2|
|19|13|DC3|␓|^S|Device Control 3 (oft. XOFF)|
|20|14|DC4|␔|^T|Device Control 4|
|23|17|ETB|␗|^W|End of Trans. Block|
|25|19|EM|␙|^Y|End of Medium|
In general, the practical issues involved for a non-printable character, in the context of a programing language for text editing, are:
(Note: Emacs has several input methods to enter any non-printable chars in Unicode. See Emacs & Unicode Tips.)
\e, \r, \n, \t
There are good reasons that these are preferred than a literal or the more systematic caret notation. Here are some reasons:
In the above, we discussed non-printable chars:
However, emacs also needs a system to represent keystrokes (as used in its keyboard macro system and keybinding).
Keystroke notation is not just a sequence of characters. For example, the F1 key isn't a character. The Alt modifier key, isn't a character nor is it a function in one of ASCII's non-printable character. There's also key combinations (⁖ 【Ctrl+Alt+↑】) and key sequences (⁖ 【F1 f】). The keys on the number keypad, need a different representation than the ones on the main keyboard section.
Emacs's key notation is rather confusing, due to historical reasons from 1980s.
Here are examples of multiple representation for the same keystroke (tested in emacs 22):
; equivalent code for a single keystroke
(global-set-key "b" 'beep)
(global-set-key [98] 'beep)
(global-set-key [?b] 'beep)
(global-set-key [(?b)] 'beep)
(global-set-key (kbd "b") 'beep)

; equivalent code for a named special key: Enter
(global-set-key "\r" 'beep)
(global-set-key [?\r] 'beep)
(global-set-key [13] 'beep)
(global-set-key [(13)] 'beep)
(global-set-key [return] 'beep)
(global-set-key [?\^M] 'beep)
(global-set-key [?\^m] 'beep)
(global-set-key [?\C-M] 'beep)
(global-set-key [?\C-m] 'beep)
(global-set-key [(?\C-m)] 'beep)
(global-set-key (kbd "RET") 'beep)
(global-set-key (kbd "<return>") 'beep)

; equivalent code for binding 1 mod key + 1 letter key: Meta+b
(global-set-key "\M-b" 'beep)
(global-set-key [?\M-b] 'beep)
(global-set-key [(meta 98)] 'beep)
(global-set-key [(meta b)] 'beep)
(global-set-key [(meta ?b)] 'beep)
(global-set-key (kbd "M-b") 'beep)

; equivalent code for binding 1 mod key + 1 special key: Meta+Enter
(global-set-key [M-return] 'beep)
(global-set-key [\M-return] 'beep)
(global-set-key [(meta return)] 'beep)
(global-set-key (kbd "M-<return>") 'beep)

; equivalent code for binding Meta + cap letter key: Meta Shift b
(global-set-key (kbd "M-B") 'beep)
(global-set-key "\M-\S-b" 'beep)
(global-set-key "\S-\M-b" 'beep)
(global-set-key "\M-B" 'beep)
(global-set-key [?\M-S-b] 'beep) ; invalid-read-syntax
(global-set-key [?\M-?\S-b] 'beep) ; invalid-read-syntax
(global-set-key [?\M-\S-b] 'beep) ; compile but no effect
(global-set-key [?\M-B] 'beep)
(global-set-key [\M-B] 'beep) ; compile but no effect
(global-set-key [(meta shift b)] 'beep)
(global-set-key [(shift meta b)] 'beep)
(global-set-key (kbd "M-B") 'beep)
(global-set-key (kbd "M-S-b") 'beep) ; compile but no effect

; Meta + shifted symbol key.
(global-set-key (kbd "M-@") 'beep) ; good
(global-set-key (kbd "M-S-2") 'beep) ; compile but no effect

; to do: show examples of key sequences
Note: keystroke notation is not a new concept. Here are some examples of syntax from different keyboard related software:
One of emacs's quirk is that its character data type are simply integers. So, a character “c” is just the integer 99 in emacs lisp. Now, elisp has a special read syntax for chars, so that the letter “c” in lisp can also be written as
?c instead of
99. This way, it is easier for programers to insert a character data in their program, and easier to read too. A backslash can be added in front of the char, so that
?' can be written as
?\'. This syntax is introduced in part so that Emacs's editing commands don't get confused (because the apostrophe is lisp syntax to quote symbols). Many of the control characters in ASCII also have a backslash representation. Here's a table from the elisp manual:
(info "(elisp) Character Type").
?\a ⇒ 7 ; control-g, C-g
?\b ⇒ 8 ; backspace, <BS>, C-h
?\t ⇒ 9 ; tab, <TAB>, C-i
?\n ⇒ 10 ; newline, C-j
?\v ⇒ 11 ; vertical tab, C-k
?\f ⇒ 12 ; formfeed character, C-l
?\r ⇒ 13 ; carriage return, <RET>, C-m
?\e ⇒ 27 ; escape character, <ESC>, C-[
?\s ⇒ 32 ; space character, <SPC>
?\\ ⇒ 92 ; backslash character, \
?\d ⇒ 127 ; delete character, <DEL>
So, the character tab (ASCII 9), can be represented in elisp as a character type data as: ?\t
Here's more quote from the manual:
Control characters may be represented using yet another read syntax. This consists of a question mark followed by a backslash, caret, and the corresponding non-control character, in either upper or lower case. For example, both `?\^I' and `?\^i' are valid read syntax for the character C-i, the character whose value is 9.
Instead of the `^', you can use `C-'; thus, `?\C-i' is equivalent to `?\^I' and to `?\^i':
?\^I ⇒ 9
?\C-I ⇒ 9
… The read syntax for meta characters uses `\M-'. For example, `?\M-A' stands for M-A. You can use `\M-' together with octal character codes (see below), with `\C-', or with any other syntax for a character. Thus, you can write M-A as `?\M-A', or as `?\M-\101'. Likewise, you can write C-M-b as `?\M-\C-b', `?\C-\M-b', or `?\M-\002'.
So now, the tab char can be any of:
9 ?\t ?\^i ?\^I ?\C-i ?\C-I
(info "(elisp) Key Sequences")
Thanks to diszno for a correction on | 1 | 2 |
The origins of Apple's successful OS
Ten years after its beta debuted, we look at where the ultra-successful Apple Mac OS X operating system came from
From NeXTSTEP to Rhapsody
It wasn't long before Jobs found himself in the driver's seat of Apple as interim CEO. He appointed his trusted and accomplished NeXT brethren to important posts within Apple, including Avie Tevanian, who became vice president of software engineering. Jobs cut dead weight in stagnant product lines and steered Apple toward calmer seas.
Apple engineers quickly began work on a new OS for Apple based on an older one: they used NeXTSTEP 4.2 as the starting point and began a three-year process of Apple-isation that would transform the advanced but generally unknown UNIX-based OS into a consumer operating system that anyone could use. The project gained a code name, Rhapsody, which stemmed from Apple's mid-1990s penchant for using classical music themed names for OS prototypes.
The goal of Rhapsody was to take the NeXTSTEP's robust foundations and overlay a look and feel that would be familiar to long time users of the old Mac OS while also retaining some measure of backward compatibility. It wasn't long before Apple developed a prototype that functioned mostly like NeXTSTEP but possessed graphical elements borrowed from the 'Platinum' theme of Mac OS 8. Apple put this version, called Rhapsody Developer Release, into the hands of developers in August 1997 so they could begin porting software over in preparation for the great OS transition.
But all was not well. Apple met significant resistance to the new OS from Adobe (www.adobe.com/uk), a key developer who produced graphic design tools that were so vital to the design-centric Mac user base. Apple originally wanted to channel all new development for Rhapsody through a programming system they called 'Yellow Box', which was essentially an updated version of the OPENSTEP development environment from the NeXTSTEP days.
Yellow Box would have allowed applications developed for Rhapsody to be easily ported to other operating systems (like Windows) and even between processor architectures like PowerPC and x86. Unfortunately, developers would have had to abandon any investment they put into building Classic OS applications; all Rhapsody versions of Mac software would need to be re-coded from scratch.
Adobe balked at Apple's plan for Yellow Box and refused to port its software over to Rhapsody. This lack of support from a key third party developer, in addition to grumblings from other developers, ultimately sent Apple back to the drawing board, and after a few more developer-only revisions, Apple pulled the plug on their original Rhapsody plan in 1998.
Rhapsody wasn't truly dead, however. In its place came murmurs about 'Mac OS X' (X being the roman numeral for 10, making it the clear successor to a planned classic OS release). Under the name Mac OS X Server 1.0, Apple released the first and only commercial version of Rhapsody in March of 1999. It retained the classic platinum interface of OS 8 (and the Rhapsody prototypes) but its heart beat with the rhythm of NeXTSTEP.
In graph theory, a chordal graph is one in which every cycle of four or more vertices has a chord, an edge connecting two vertices of the cycle that are not adjacent in the cycle. An equivalent definition is that any chordless cycle has at most three nodes. In other words, a chordal graph is a graph with no induced cycles of length more than three.
Chordal graphs are a subset of perfect graphs. They are sometimes also called rigid circuit graphs or triangulated graphs. (The latter term is sometimes erroneously used for plane triangulations; see maximal planar graphs.)
Perfect elimination and efficient recognition
A perfect elimination ordering in a graph is an ordering of the vertices of the graph such that, for each vertex v, v and the neighbors of v that occur after v in the order form a clique. A graph is chordal if and only if it has a perfect elimination ordering (Fulkerson & Gross 1965).
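The definition can be checked directly. Below is a naive Python sketch (cubic time, unlike the linear-time methods discussed next; the toy graph and vertex names are invented for illustration):

```python
def is_perfect_elimination_ordering(adj, order):
    """adj maps each vertex to its neighbor set; order lists all vertices.
    For every v, the neighbors of v occurring AFTER v must form a clique."""
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [w for w in adj[v] if pos[w] > pos[v]]
        for i, a in enumerate(later):
            for b in later[i + 1:]:
                if b not in adj[a]:
                    return False
    return True

# A 4-cycle a-b-c-d with the chord b-d is chordal:
adj = {"a": {"b", "d"}, "b": {"a", "c", "d"},
       "c": {"b", "d"}, "d": {"a", "b", "c"}}
print(is_perfect_elimination_ordering(adj, ["a", "c", "b", "d"]))  # True
print(is_perfect_elimination_ordering(adj, ["b", "a", "c", "d"]))  # False: not every ordering works
```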
Rose, Lueker & Tarjan (1976) (see also Habib et al. 2000) show that a perfect elimination ordering of a chordal graph may be found efficiently using an algorithm known as lexicographic breadth-first search. This algorithm maintains a partition of the vertices of the graph into a sequence of sets; initially this sequence consists of a single set with all vertices. The algorithm repeatedly chooses a vertex v from the earliest set in the sequence that contains previously unchosen vertices, and splits each set S of the sequence into two smaller subsets, the first consisting of the neighbors of v in S and the second consisting of the non-neighbors. When this splitting process has been performed for all vertices, the sequence of sets has one vertex per set, in the reverse of a perfect elimination ordering.
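An equivalent way to run this search keeps a label per vertex and always visits an unvisited vertex whose label is lexicographically largest. The Python sketch below takes that view; it is quadratic, not the linear-time partition refinement, and the example graph is invented for illustration:

```python
def lex_bfs(adj):
    """Lexicographic BFS. For a chordal graph, reversing the
    returned visit order yields a perfect elimination ordering."""
    labels = {v: [] for v in adj}  # Python lists compare lexicographically
    unvisited = set(adj)
    order = []
    n = len(adj)
    for step in range(n):
        v = max(unvisited, key=lambda u: labels[u])
        unvisited.discard(v)
        order.append(v)
        for w in adj[v]:
            if w in unvisited:
                labels[w].append(n - step)  # decreasing timestamps
    return order

# 4-cycle a-b-c-d plus chord b-d (a chordal graph):
adj = {"a": {"b", "d"}, "b": {"a", "c", "d"},
       "c": {"b", "d"}, "d": {"a", "b", "c"}}
peo = list(reversed(lex_bfs(adj)))

# Verify the perfect elimination property: later neighbors form a clique.
pos = {v: i for i, v in enumerate(peo)}
for v in peo:
    later = [w for w in adj[v] if pos[w] > pos[v]]
    assert all(b in adj[a] for a in later for b in later if a != b)
print("reversed LexBFS order is a perfect elimination ordering")
```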
Since both this lexicographic breadth first search process and the process of testing whether an ordering is a perfect elimination ordering can be performed in linear time, it is possible to recognize chordal graphs in linear time. The graph sandwich problem on chordal graphs is NP-complete (Bodlaender, Fellows & Warnow 1992), whereas the probe graph problem on chordal graphs has polynomial-time complexity (Berry, Golumbic & Lipshteyn 2007).
The set of all perfect elimination orderings of a chordal graph can be modeled as the basic words of an antimatroid; Chandran et al. (2003) use this connection to antimatroids as part of an algorithm for efficiently listing all perfect elimination orderings of a given chordal graph.
Maximal cliques and graph coloring
Another application of perfect elimination orderings is finding a maximum clique of a chordal graph in polynomial-time, while the same problem for general graphs is NP-complete. More generally, a chordal graph can have only linearly many maximal cliques, while non-chordal graphs may have exponentially many. To list all maximal cliques of a chordal graph, simply find a perfect elimination ordering, form a clique for each vertex v together with the neighbors of v that are later than v in the perfect elimination ordering, and test whether each of the resulting cliques is maximal.
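That recipe is short in code. A Python sketch (the toy chordal graph is invented for illustration, and the ordering passed in is assumed to be a valid perfect elimination ordering of it):

```python
def maximal_cliques(adj, peo):
    """Maximal cliques of a chordal graph from a perfect elimination
    ordering: one candidate per vertex (v plus its later neighbors),
    keeping only candidates not strictly contained in another."""
    pos = {v: i for i, v in enumerate(peo)}
    candidates = [frozenset({v} | {w for w in adj[v] if pos[w] > pos[v]})
                  for v in peo]
    return {c for c in candidates if not any(c < d for d in candidates)}

# 4-cycle a-b-c-d with chord b-d; ["a", "c", "b", "d"] is a valid PEO:
adj = {"a": {"b", "d"}, "b": {"a", "c", "d"},
       "c": {"b", "d"}, "d": {"a", "b", "c"}}
print(maximal_cliques(adj, ["a", "c", "b", "d"]))
# the two triangles: {a, b, d} and {b, c, d}
```

Note that only n candidate cliques are ever formed, matching the claim that a chordal graph has only linearly many maximal cliques.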
The largest maximal clique is a maximum clique, and, as chordal graphs are perfect, the size of this clique equals the chromatic number of the chordal graph. Chordal graphs are perfectly orderable: an optimal coloring may be obtained by applying a greedy coloring algorithm to the vertices in the reverse of a perfect elimination ordering (Maffray 2003).
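A sketch of that greedy coloring in Python (same kind of invented toy graph; the ordering passed in is assumed to be a valid perfect elimination ordering):

```python
def greedy_color(adj, peo):
    """Color vertices in REVERSE perfect-elimination order, giving each
    the smallest color not used by an already-colored neighbor. On a
    chordal graph this uses exactly as many colors as the largest clique."""
    color = {}
    for v in reversed(peo):
        used = {color[w] for w in adj[v] if w in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# 4-cycle a-b-c-d with chord b-d; the largest clique is a triangle:
adj = {"a": {"b", "d"}, "b": {"a", "c", "d"},
       "c": {"b", "d"}, "d": {"a", "b", "c"}}
coloring = greedy_color(adj, ["a", "c", "b", "d"])
print(max(coloring.values()) + 1)  # 3 colors, matching the clique number
```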
Minimal separators
In any graph, a vertex separator is a set of vertices the removal of which leaves the remaining graph disconnected; a separator is minimal if it has no proper subset that is also a separator. According to a theorem of Dirac (1961), chordal graphs are graphs in which each minimal separator is a clique; Dirac used this characterization to prove that chordal graphs are perfect.
The family of chordal graphs may be defined inductively as the graphs whose vertices can be divided into three nonempty subsets A, S, and B, such that A ∪ S and S ∪ B both form chordal induced subgraphs, S is a clique, and there are no edges from A to B. That is, they are the graphs that have a recursive decomposition by clique separators into smaller subgraphs. For this reason, chordal graphs have also sometimes been called decomposable graphs.
Intersection graphs of subtrees
From a collection of subtrees of a tree, one can define a subtree graph, which is an intersection graph that has one vertex per subtree and an edge connecting any two subtrees that overlap in one or more nodes of the tree. Gavril showed that the subtree graphs are exactly the chordal graphs.
A representation of a chordal graph as an intersection of subtrees forms a tree decomposition of the graph, with treewidth equal to one less than the size of the largest clique in the graph; the tree decomposition of any graph G can be viewed in this way as a representation of G as a subgraph of a chordal graph. The tree decomposition of a graph is also the junction tree of the junction tree algorithm.
Relation to other graph classes
Split graphs are graphs that are both chordal and the complements of chordal graphs. Bender, Richmond & Wormald (1985) showed that, in the limit as n goes to infinity, the fraction of n-vertex chordal graphs that are split approaches one.
Ptolemaic graphs are graphs that are both chordal and distance hereditary. Quasi-threshold graphs are a subclass of Ptolemaic graphs that are both chordal and cographs. Block graphs are another subclass of Ptolemaic graphs in which every two maximal cliques have at most one vertex in common. A special type is windmill graphs, where the common vertex is the same for every pair of cliques.
Strongly chordal graphs are graphs that are chordal and contain no n-sun (n≥3) as induced subgraph.
K-trees are chordal graphs in which all maximal cliques and all maximal clique separators have the same size. Apollonian networks are chordal maximal planar graphs, or equivalently planar 3-trees. Maximal outerplanar graphs are a subclass of 2-trees, and therefore are also chordal.
Chordal graphs are a subclass of the well known perfect graphs. Other superclasses of chordal graphs include weakly chordal graphs, odd-hole-free graphs, and even-hole-free graphs. In fact, chordal graphs are precisely the graphs that are both odd-hole-free and even-hole-free (see holes in graph theory).
Every chordal graph is a strangulated graph, a graph in which every peripheral cycle is a triangle, because peripheral cycles are a special case of induced cycles. Strangulated graphs are graphs that can be formed by clique-sums of chordal graphs and maximal planar graphs. Therefore strangulated graphs include maximal planar graphs.
- Bender, E. A.; Richmond, L. B.; Wormald, N. C. (1985), "Almost all chordal graphs split", J. Austral. Math. Soc., A 38 (2): 214–221, doi:10.1017/S1446788700023077, MR 0770128.
- Berry, Anne; Golumbic, Martin Charles; Lipshteyn, Marina (2007), "Recognizing chordal probe graphs and cycle-bicolorable graphs", SIAM Journal on Discrete Mathematics 21 (3): 573–591, doi:10.1137/050637091.
- Bodlaender, H. L.; Fellows, M. R.; Warnow, T. J. (1992), "Two strikes against perfect phylogeny", Proc. of 19th International Colloquium on Automata Languages and Programming.
- Chandran, L. S.; Ibarra, L.; Ruskey, F.; Sawada, J. (2003), "Enumerating and characterizing the perfect elimination orderings of a chordal graph", Theoretical Computer Science 307 (2): 303–317, doi:10.1016/S0304-3975(03)00221-4.
- Dirac, G. A. (1961), "On rigid circuit graphs", Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg 25: 71–76, doi:10.1007/BF02992776, MR 0130190.
- Fulkerson, D. R.; Gross, O. A. (1965), "Incidence matrices and interval graphs", Pacific J. Math 15: 835–855.
- Gavril, Fănică (1974), "The intersection graphs of subtrees in trees are exactly the chordal graphs", Journal of Combinatorial Theory, Series B 16: 47–56, doi:10.1016/0095-8956(74)90094-X.
- Golumbic, Martin Charles (1980), Algorithmic Graph Theory and Perfect Graphs, Academic Press.
- Habib, Michel; McConnell, Ross; Paul, Christophe; Viennot, Laurent (2000), "Lex-BFS and partition refinement, with applications to transitive orientation, interval graph recognition, and consecutive ones testing", Theoretical Computer Science 234: 59–84, doi:10.1016/S0304-3975(97)00241-7.
- Maffray, Frédéric (2003), "On the coloration of perfect graphs", in Reed, Bruce A.; Sales, Cláudia L., Recent Advances in Algorithms and Combinatorics, CMS Books in Mathematics 11, Springer-Verlag, pp. 65–84, doi:10.1007/0-387-22444-0_3, ISBN 0-387-95434-1.
- Patil, H. P. (1986), "On the structure of k-trees", Journal of Combinatorics, Information and System Sciences 11 (2–4): 57–64, MR 966069.
- Rose, D.; Lueker, George; Tarjan, Robert E. (1976), "Algorithmic aspects of vertex elimination on graphs", SIAM Journal on Computing 5 (2): 266–283, doi:10.1137/0205021.
- Dirac (1961).
- Weisstein, Eric W., "Triangulated Graph", MathWorld.
- Peter Bartlett. "Undirected Graphical Models: Chordal Graphs, Decomposable Graphs, Junction Trees, and Factorizations:".
- Patil (1986).
- Seymour, P. D.; Weaver, R. W. (1984), "A generalization of chordal graphs", Journal of Graph Theory 8 (2): 241–251, doi:10.1002/jgt.3190080206, MR 742878. | 1 | 5 |
The following are ten servers that have enabled IT world-beaters to develop technological advancements that transformed the way we work and live. While the people behind the systems are the real brains, this list highlights the hardware they relied on.
Without further ado, in no particular order, here’s our list of the Ten Servers that Changed the World.
The Sun Ultra 2 may seem like an unlikely candidate to make this list, but it steps up as the server which first hosted Larry Page and Sergey Brin’s Backrub search engine – which, of course, eventually evolved into Google.
In 1998, Backrub was hosted on a Sun Ultra 2 with dual 200MHz CPUs and 256MB of RAM at Stanford University. The famous image of the computer case partially made up of legos (pictured) isn't actually the Backrub server, but rather its enclosure for external storage. (There were also a couple of Intel Servers and an IBM RS/6000 F50 in their network.)
This is quite a humble beginning, considering there are now over 450,000 servers in Google’s datacenters around the world. The simplicity of its search engine and its relative results blew away their competitors. (And it all started on an Ultra 2.)
Further Reading on the Sun Ultra 2 and Google Servers:
NeXT and its NeXTCube are often cited as infamous flops. NeXT was a bold company, led by Steve Jobs during his Apple sabbatical, that didn’t quite live up to expectations. Despite its shortcomings, the NeXTCube will always have a place in history as the very first web server.
The World Wide Web was born on a NeXTCube with a 25MHz CPU, 2GB of disk and a gray scale monitor. Sir Tim Berners-Lee put the first web page online on August 6, 1991 while working for CERN in Geneva, Switzerland. He designed the first web browser and editor, WorldWideWeb, on the NeXTSTEP OS. Berners-Lee continues to shape the web world as the founder of the W3C (World Wide Web Consortium), a researcher at MIT and recently as an advocate for the protection of Net Neutrality.
In 1996, Apple Computer acquired NeXT – and several components of the NeXTStep OS would be crucial in the development of Mac OS X. Sun Microsystems had also made investments in NeXT and ported some of the OS’s components into the PA-RISC SPARC systems. Incidentally, the NeXTCube was also used by John Carmack to develop the games Wolfenstein 3D and Doom.
Further Reading on the NeXTCube:
- The NeXTonian
- Tim Berners-Lee Homepage (note the Kevin Spacey resemblance)
- Steve Jobs and NExT Video
While Alexander Graham Bell’s first telephone call was clearly documented as “Mr. Watson, come here,” the world will never know what, exactly, was transmitted between two side-by-side DEC PDP-10 nodes in 1971.
The legendary Ray Tomlinson of BBN sent the first email over these nodes via the ARPANET network, but contends that he doesn't recall the characters he first transmitted, stating that it was "something like 'QWERTYUIOP'." Nonetheless, network email was born, and just over two decades later became the foundation for electronic communication, breaking down barriers and flattening the world. We can also thank Ray for bringing the "@" symbol into our daily use, as he decided to assign it as the unique character within email addresses.
Unlike the PDP-10 pictured here, Tomlinson’s PDP-10s did not have any type of monitor or green screen, but rather would output to a printer reel. The PDP-10 was a great success for DEC and was eventually used by companies like Microsoft, who developed several versions of the BASIC language on it. The model was widely adopted by universities, and in fact, even Bill Gates learned on one in college. And Gates’ counterpart, Paul Allen, seems to have a place in his heart for them as well – owning a working model in his personal collection, documented online at PDP-Planet. The CGI for the movie TRON was rendered on a PDP-10, too.
Those interested in testing their programs in 36-bit goodness can find several PDP emulators on the net to play with.
Further Reading on the DEC PDP-10:
- Columbia University Info page
- Ray Tomlinson’s Personal Page at BBN
- The Computer History Museum – 1970s Internet History
- INWAP’s PDP-10 Information Galore
Before the internet became “a series of tubes,” SAGE, the first fully operational wide-scale network, actually was.
SAGE was designed by IBM at MIT in 1956 for the Air Force. It was based on several of the IBM AN/FSQ-7 Intercept computers, and performed as an air defense system. Each AN/FSQ-7 used 55,000 vacuum tubes and occupied almost a 1/2 acre of datacenter space. It was the biggest computer in history and its size will most likely never be surpassed.
The AN/FSQ-7 Intercept was a 32-bit dual processor system with hot-pluggable power supplies, a modem and sold for $238 million. It turns out that SAGE probably wouldn’t have worked for its intended purpose of air defense, but the AN/FSQ-7s stayed in production until at least 1985. They served well for air traffic control and were also a popular backdrop for Hollywood command centers.
Further Reading on SAGE and the AN/FSQ-7:
When world champion chess player Garry Kasparov lost a game of chess to IBM's Deep Blue computer on February 10, 1996, the world was at attention.
Deep Blue ran on the AIX OS and was built on a 32-node RS/6000 SP RISC system. It could evaluate 200 million positions per second and rank the "goodness" of each one. It didn't necessarily create a new technology or make significant advances towards one… so how did it make this list?
Over 100 years earlier, the industrial revolution had begun to make manual labor more efficient and reduce opportunities for man. Fears manifested in fables such as that of John Henry, who represented the best laborer man could offer versus machine. Now in the 1990s, automation seemed not only a threat to hard labor, but also to those who use their brains instead of their bodies. I believe Deep Blue versus Kasparov became more than a marketing event for IBM; it turned into a modern day fable representing our collective fears of what technology could accomplish, and therefore what it could take away. Of course, this fable is exaggerated because new opportunities will inevitably arise with new technological developments.
As an interesting side note, there are theories that Deep Blue may have had some help. Kasparov believes the machine did not act appropriately, and other research has shown intriguing evidence as such. IBM denies any interference.
Further Reading on Deep Blue and the IBM RS/6000 SP Node:
In 1965, two servers on opposite coasts were networked together, driving home the golden spike in a transcontinental wide area network.
Thomas Marill came up with a strategy to connect distant computers and transfer data across telephone wires. Marill then hooked up with Larry Roberts and ARPA to make it happen.
The Lincoln TX-2 at the Lincoln labs at MIT, designed by Wesley Clark, was connected with an IBM Q-32 (AN/FSQ-32) in Santa Monica, California at SDC (System Development Corporation) Headquarters. In 1966, Marill and Roberts documented their experiment and co-wrote Toward a Cooperative Network of Time-Shared Computers.
Further Reading on the Lincoln TX2 and IBM Q32:
When the first LISTSERV was created in 1981, the doors opened up to group email collaboration. (Not to mention list spam, off-topic discussions and flame wars.)
The original LISTSERV was hosted on an IBM VM mainframe over BITNET (Because It’s Time NETwork). BITNET would later incorporate DEC VAX systems into its network as well. Ira H. Fuchs of CUNY and Greydon Freeman of Yale decided to connect their universities using a leased telephone circuit between their mainframes.
By 1982, BITNET reached across the US and into Europe, creating a worldwide network. The network peaked in connecting over 1400 organizations in 49 countries, but would sharply decline from here on out due to the growth of the internet.
Further Reading on BITNET and the IBM VM Mainframe:
The DEC PDP-7 was released in 1965, but it was in 1969 that Ken Thompson of Bell Labs and his team developed the Unix OS. He would have preferred a PDP-10 or an SDS Sigma 7, but funding was refused, so the PDP-7 had to suffice.
In brief, Thompson was familiar with the MULTICS OS and had been developing a game called Space Travel on it. He wanted to develop advanced functions such as rotating planets that didn’t seem possible in the current iteration of MULTICS. He was inspired to come up with a new OS that could be programmed on the PDP-7 and his team dubbed it UNICS (an emasculated MULTICS). The name obviously evolved into “Unix,” as did the OS itself once it was developed on more advanced systems such as the PDP-11.
Because UNIX was developed on the PDP-7, with its printer output and no monitors or terminals, its commands and responses remain very terse to this day.
Further Reading on the DEC PDP-7:
We could have gone with the XBOX 360 here, due to the number of people who are hacking Linux onto it, but the fact that the Playstation 3 is going to support and distribute Linux gave it the edge. There are also programs already shaping up to use the PS3 as a wide scale distributed computing system.
Although it is yet to be released to the general public, the PS3 looks like it has the potential to put server power in the hands of thousands who haven’t had it before. The distributed computing options will also supply additional processor nodes to those networks that need all they can get, such as SETI or Folding@home.
The original users of those systems above, such as the 1/2-acre SAGE system, must be blown away by the processing power packed into this home console:
- 3.2GHz Cell Broadband Engine CPU
- 60GB ATA Hard Drive
- 256MB RAM
- 550MHz RSX Graphics Processing Unit
- Built-in Network Capabilities
Disclaimer: I don’t work for or represent any of the brands above, but if Sony wants to send Vibrant a PS3 for the office, we’re just fine with that!
Further Reading on the Sony Playstation 3:
I’m hoping that this is just the beginning of the discussion, so please let us know your thoughts in the comment section below.
Any glaring omissions?
A system you would take off of the list?
The history of Turkish Stamps
The Ottoman Empire reached its furthest extent in 1648 when its sultan ruled from the gates of Vienna to the Persian Gulf and included within his dominions the coasts of North Africa and the Black Sea. Only a decisive defeat in Malta had blocked the way still further west. After defeat at the hands of Catherine the Great of Russia decline was continuous from 1774 to the end of World War I. By the time of the first stamp issue (1863) the Ottoman Empire still comprised most of the Balkans (except southern Greece) as far as the Danube, and much of the Near East. Successive issues, therefore, were used in an ever-contracting area as territories were lost.
FIRST STAMPS ISSUED May 1863
1863, 40 paras = 1 piastre.
1929, 40 paras = 1 kurus.
100 kurus = 1 lira (TD).
1942, 100 paras = 1 kurus.
1947, 100 kurus = 1 lira.
The gradual erosion of the Ottoman Empire was hastened by World War I. By 1919, Turkey in Asia was reduced to its present boundaries, except for some difficulties in the establishment of the Syrian border, which was not finalised until 1939. The collapse of internal government allowed Greece to invade through Izmir (Smyrna) in 1919, but the Greeks were repulsed by a reconstituted Turkish Army led by Kemal Ataturk. A confrontation with the Western Allies at Chanak in 1922 was avoided, and Ataturk welded the factions of the nation into a single unit. Turkey remained neutral for most of World War II, but took part in the Korean War and subsequently became part of NATO.
Austrian POs in the Ottoman Empire (in Turkey)
FIRST STAMPS Turkish 1863
FIRST STAMPS ISSUED 1 June 1867.
1863, 100 Soldi = 1 Florin.
An overland courier service established after the Peace of Passarowitz (1721) was recognized in 1739. In 1748 an Austrian PO was set up in Galata separately from the Istanbul embassy, and the service extended to Smyrna. For POs with dates see map. After 1836 mail was carried by the Austrian Lloyd Steam Navigation Company, based in Trieste, which operated TPOs and whose agents acted as postmasters.
Used stamps of Lombardy-Venetia ('Austrian Italy') in 1863-7: dates of issue:
ROPiT (Russian POs in the Ottoman Empire, in Turkey)
FIRST STAMPS Russian, November 1862.
FIRST STAMPS ISSUED 1 January 1863.
1863, as Russia.
1900, 40 paras = 1 piastre.
Though Russian consular couriers carried despatches between Istanbul and St Petersburg from 1721, a regular Russian postal service was a consequence of the Treaty of Kuchuk Kainarji (1774). A consular PO was opened in Istanbul (Pera; used handstamps from c.1830), a mail-boat plied between Istanbul and Kherson from 1779, and an overland mail route was opened in 1781 (Istanbul - Giurgiu - Bucharest - Focsani - Jassy - Bratzlav). This was suspended during various wars: 1787-92, 1806-12, 1828-9, and 1854-6. In 1856, after the Crimean War, the Russian service was entrusted to RUSSKOE OBSHCHESTVO PAROKHODSTVA i TORGOVLI (ROPiT; Russian Company of Trade and Navigation) with a PO at Istanbul (Galata) and PAs at every port-of-call. Handstamps were used from 1859 at Istanbul and from 1862 on ROPiT ships. There was direct transmission between POs; all external mail was routed via Odessa into the Imperial Russian PO. Numeral cancellers were allocated to ports in 1862: Batum, 777; Trebizond, 778; Mytilene, 779; Smyrna, 780; Mersin, 781; Alexandretta, 782; Beirut, 783; Jaffa, 784; Alexandria, 785; Salonica, 787; and to many others in later periods. From May 1868 the ROPiT agencies were given the status of Russian POs abroad and surviving consular POs were closed.
Individually overprinted stamps were issued in 1909 for the following POs: Galata, Kerassunde, Trebizond, Rizeh, Dardanelles, Smyrna, Beirut, Jaffa, Jerusalem Mytilene, Salonica, Mount Athos.
All Russian POs on Turkish soil were closed on or before 30 September 1914 (though those in places ceded to Greece in 1913 may have remained open later). Though some ROPiT agencies re-opened briefly in 1919, the postal service failed for lack of ships in White Russian hands; stamps were sold to collectors rather than used for postage.
FIRST STAMPS late November 1920 (suppressed 31 May 1921).
Depreciated Wrangel roubles.
Post organized by General Wrangel to serve White Russian refugees (military and civilian) from the Crimea, lodged in camps mainly round Istanbul. There were in addition camps on Lemnos, in Belgrade, at Cattaro (Kotor in Yugoslavia) and Bizerta. (See also South Russia under Europe).
FIRST STAMPS ISSUED 5 August 1885.
1885, 25 centimes = 1 piastre.
A French PO was opened in Istanbul in 1812. It was suspended in 1827-35 as a consequence of the Greek War of Independence. After the closures of 13 October 1914, only Istanbul reopened (August 1921 - July 1923).
FIRST STAMPS British 1854.
FIRST STAMPS ISSUED 1 April 1885.
1885, 40 paras = 1 piastre.
British Embassy mail started in 1832. In November 1854 an Army PO was established in Istanbul as a sorting and forwarding office for forces in the Crimea. The PO was opened for public service (oblit. 'C' in oval of bars) in September 1857; further POs were opened in Smyrna in 1872 (oblit. F87) and Beirut in 1873 (oblit. G 06). A second office was opened at Stamboul (oblit. 'S' in oval of bars) in 1884 but this was closed in the 1890s and did not reopen until 1908.
Because of speculation with Turkish currency, stamps overprinted in Turkish currency were issued on 1 April 1885. These were used concurrently with British adhesives and, later, stamps in British currency overprinted LEVANT. The latter were used for prepayment of parcels where the value of the contents was expressed in sterling.
An office was opened at Salonica in 1900 but only circular postmarks were used. All offices were closed on 30 September 1914, but the Smyrna office was reopened during 1919-22 and used unoverprinted adhesives.
Istanbul had a British Army PO in 1918-20 and a civilian PO with overprinted stamps was open from 1920 to 1923.
German POs in Turkish Empire
FIRST STAMPS North German Confederation 1 March 1870; Germany 1872.
FIRST STAMPS ISSUED 1884.
Istanbul office was opened at Pera on 1 March 1870, but moved to Galata on 1 October 1877; a branch office was placed at Stamboul on 1 January 1876. Short-lived branches operated at Buyukdere 1880-4 and Therapia 1884-8. Later offices: Jaffa (1 October 1898), Jerusalem, Smyrna, Beirut, Pera (all 1 March 1900), were closed on 30 September 1914.
Italian POs in Turkish Empire
FIRST STAMPS Italy 1873.
FIRST STAMPS ISSUED 1908 (Istanbul).
CURRENCY 1908, as Turkey.
Both Venice and Naples maintained postal connections with the Levant in the 18th century but these had lapsed before Unification. In 1873 Italian PAs were established in Istanbul, Smyrna, and Beirut. These were suppressed in 1883. From 1901 in Albania (q.v.) and 1908 elsewhere, Italian POs were opened (some by threat of force): Istanbul (Galata, Pera, Stamboul) 1 June 1908; Smyrna, Jerusalem, Salonica, and Valona.
Used stamps of Italian POs Abroad (ESTERO overprints) 1 January 1874-December 1883. Stamps of Italy were used in Istanbul because of shortages caused by collectors. Aegean Islands issues, see Greece.
First separately overprinted (CONSTANTINOPOLI) stamps February 1909.
First separately overprinted (GERUSALEMME) stamps February 1909.
First separately overprinted (SALONICCO) stamps February 1909.
Romanian POs in the Turkish Empire
FIRST STAMPS 16 March 1896.
FIRST STAMPS ISSUED 1908 (Istanbul).
CURRENCY As Turkey.
A TPO was placed aboard a Romanian Steamship Company vessel to carry consular mails from Constantinople. This used special stamps. (See also Romania.) In 1919 an attempt was made to restart the service. While the ship was moored at Constantinople, mails and stamps were seized by Turkish police on 25 May and the PO closed.
Polish POs in Istanbul
FIRST STAMPS ISSUED 16 May 1919.
1919, as Turkey; 100 fenigi = 1 marka.
Polish consulate opened a PO in May 1919, which closed in 1923.
Egyptian POs in Turkish Empire
POs were established at Istanbul (1866), and in 1870 at Beirut, Chios, Jaffa, Mersin, Mytilene, Salonica, Smyrna, Tripoli, Volos, Dardanelles and Gallipoli.
Used stamps of Egypt (distinguishable by cancellations).
Greek POs in Turkish Empire
Greek consular PAs were established at Constantinople (1834); Salonica and Dardanelles (1835); Bucharest, Ibraila, and Jassy (1857); Galatz and Larissa (January 1860); also at Volos, and at Candia, Canea and Rethymno in Crete. POs at Constantinople (1849) and at Smyrna (1857) were separated from the consulates and handled the mail of Greek citizens.
Used stamps of Greece 13 October 1861 - 25 April 1881 (cancellations bear the name of the town transliterated into Greek with TOYPKIA in brackets in the lower segment)
Cilicia

FIRST STAMPS ISSUED 4 March 1919
1919, as Turkey.
An area between the Taurus Mountains and the Gulf of Iskenderun (Alexandretta) corresponding roughly to the Turkish vilayet of Adana, occupied by French troops from 1918 to 20 October 1921.
FIRST STAMPS Ankara, 1920-1923.
FIRST SEPARATE STAMPS 1923.
The period from the founding of the Grand National Assembly (BMM) in Ankara on 23 April 1920 to the proclamation of the Republic of Turkey on 29 October 1923 is known in Turkish philatelic history as the Anatolian Period. During this period, certain stamps of the Ottoman era (see the Catalogue section for details) were surcharged and used by the Ankara government for its correspondence. With the proclamation of the Republic, Republic stamps were printed and brought into use.

Hatay
FIRST STAMPS Syria 1918.
FIRST SEPARATE STAMPS 16 April 1938.
1938, 100 centimes = 1 piastre.
1939, 100 santims = 40 paras = 1 kurus.
The northern part of the former Turkish province of Syria round Antioch, given autonomy by the French on 4 March 1923 as the Sanjak of Alexandretta. After some rioting against reincorporation into French Syria, an election on 2 September 1938 voted for an autonomous republic. This was incorporated into Turkey on 30 June 1939 (the province is now known as Hatay, its chief town as Antakya, and Alexandretta has been renamed Iskenderun).
Used stamps of Syria in 1918-38. Used stamps of Turkey from 1939.
Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology |
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech.
Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.
The quality of a speech synthesizer is judged by its similarity to the human voice, and by its ability to be understood. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Many computer operating systems have included speech synthesizers since the early 1980s.
Overview of text processing
A text-to-speech system (or "engine") is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end—often referred to as the synthesizer—then converts the symbolic linguistic representation into sound.
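The two-stage pipeline just described can be sketched in a few lines. This is a toy illustration only: the tiny lexicon, the phone symbols, and the function names are invented for the example, not taken from any real engine.

```python
# Toy sketch of the front-end stages described above: normalization,
# grapheme-to-phoneme lookup, and a symbolic output handed to a back-end.
# The lexicon and phone symbols here are invented for illustration.

LEXICON = {"dr.": ["D", "AA1", "K", "T", "ER0"],   # "doctor"
           "smith": ["S", "M", "IH1", "TH"]}

def normalize(text):
    """Tokenize raw text; here we only lowercase and split on whitespace."""
    return text.lower().split()

def to_phonemes(tokens):
    """Dictionary lookup with a trivial spell-out fallback for unknown words."""
    phones = []
    for tok in tokens:
        phones.extend(LEXICON.get(tok, list(tok.upper())))
    return phones

def front_end(text):
    """Return the symbolic linguistic representation passed to a back-end."""
    return to_phonemes(normalize(text))

print(front_end("Dr. Smith"))
```

A real front-end would also mark prosodic phrase boundaries and stress, which the sketch omits.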
Long before electronic signal processing was invented, there were those who tried to build machines to create human speech. Some early legends of the existence of "speaking heads" involved Gerbert of Aurillac (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294).
In 1779, the Danish scientist Christian Kratzenstein, working at the Russian Academy of Sciences, built models of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation, they are [aː], [eː], [iː], [oː] and [uː]). This was followed by the bellows-operated "acoustic-mechanical speech machine" by Wolfgang von Kempelen of Vienna, Austria, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1857, M. Faber built the "Euphonia". Wheatstone's design was resurrected in 1923 by Paget.
In the 1930s, Bell Labs developed the VOCODER, a keyboard-operated electronic speech analyzer and synthesizer that was said to be clearly intelligible. Homer Dudley refined this device into the VODER, which he exhibited at the 1939 New York World's Fair.
The Pattern playback was built by Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories in the late 1940s and completed in 1950. There were several different versions of this hardware device but only one currently survives. The machine converts pictures of the acoustic patterns of speech in the form of a spectrogram back into sound. Using this device, Alvin Liberman and colleagues were able to discover acoustic cues for the perception of phonetic segments (consonants and vowels).
Early electronic speech synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems is still clearly distinguishable from actual human speech.
Electronic devices
The first computer-based speech synthesis systems were created in the late 1950s, and the first complete text-to-speech system was completed in 1968. In 1961, physicist John Larry Kelly, Jr and colleague Louis Gerstman used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his novel and screenplay 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as it is being put to sleep by astronaut Dave Bowman. Despite the success of purely electronic speech synthesis, research is still being conducted into mechanical speech synthesizers.
Synthesizer technologies
The most important qualities of a speech synthesis system are naturalness and Intelligibility. Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics.
The two primary technologies for generating synthetic speech waveforms are concatenative synthesis and formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.
Concatenative synthesis
Concatenative synthesis is based on the concatenation (or stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.
Unit selection synthesis
Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode with some manual correction afterward, using visual representations such as the waveform and spectrogram. An index of the units in the speech database is then created based on the segmentation and acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At runtime, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree.
Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech. Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database.
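The search for the "best chain of candidate units" can be illustrated with a small dynamic-programming sketch in the spirit of Viterbi decoding. The candidate units, target costs, and join costs below are invented; real systems derive them from weighted decision trees and acoustic distances.

```python
# Minimal Viterbi-style unit selection: pick one candidate unit per target
# position, minimizing target cost plus join cost between neighbours.
# Candidates and costs are invented for the example.

def select_units(candidates, join_cost):
    # candidates: list of lists of (unit_id, target_cost)
    # best[i][j] = (cheapest total cost, backpointer) for candidate j at position i
    best = [[(tc, None) for _, tc in candidates[0]]]
    for i in range(1, len(candidates)):
        row = []
        for uid, tc in candidates[i]:
            options = [(best[i - 1][k][0] + join_cost(candidates[i - 1][k][0], uid), k)
                       for k in range(len(candidates[i - 1]))]
            cost, back = min(options)
            row.append((cost + tc, back))
        best.append(row)
    # Trace back the cheapest chain.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    chain = []
    for i in range(len(candidates) - 1, -1, -1):
        chain.append(candidates[i][j][0])
        j = best[i][j][1]
    return chain[::-1]

cands = [[("a1", 0.1), ("a2", 0.5)],
         [("b1", 0.3), ("b2", 0.2)]]
smooth = lambda u, v: 0.0 if (u, v) == ("a1", "b2") else 1.0
print(select_units(cands, smooth))  # -> ['a1', 'b2']
```

Each position keeps, per candidate, the cheapest total cost of any chain ending there; tracing back the pointers yields the selected unit sequence.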
Diphone synthesis
Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA or MBROLA. The quality of the resulting speech is generally worse than that of unit-selection systems, but more natural-sounding than the output of formant synthesizers. Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining, although it continues to be used in research because there are a number of freely available software implementations.
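The diphone inventory idea is easy to show: an utterance's phone string is cut into units that each span one sound-to-sound transition. The phone symbols here are invented, and real inventories also store boundary diphones against silence, which the sketch marks with "_" padding.

```python
# Turn a phone sequence into the diphone names a concatenative back-end
# would look up: each unit spans the second half of one phone and the
# first half of the next. "_" marks silence at the utterance edges.

def diphones(phones):
    padded = ["_"] + phones + ["_"]
    return [f"{a}-{b}" for a, b in zip(padded, padded[1:])]

print(diphones(["h", "e", "l", "o"]))
# ['_-h', 'h-e', 'e-l', 'l-o', 'o-_']
```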
Domain-specific synthesis
Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports. The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings.
Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language however can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the "r" in words like "clear" /ˈkliːə/ is usually only pronounced when the following word has a vowel as its first letter (e.g. "clear out" is realized as /ˌkliːəɹˈɑʊt/). Likewise in French, many final consonants become no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
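A talking clock is the classic case: synthesis reduces to picking prerecorded prompt names in a fixed sentence frame. The prompt identifiers below are hypothetical stand-ins for audio recordings.

```python
# Domain-specific synthesis as pure phrase concatenation: a talking clock
# selects prerecorded prompt names in a fixed sentence frame.
# The prompt identifiers stand in for audio files.

def clock_prompts(hour, minute):
    prompts = ["the_time_is", f"hour_{hour}"]
    if minute == 0:
        prompts.append("oclock")
    else:
        prompts.append(f"minute_{minute:02d}")
    return prompts

print(clock_prompts(9, 5))   # ['the_time_is', 'hour_9', 'minute_05']
```

Note that even this toy frame dodges the blending problems described above only because the prompt boundaries are fixed in advance.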
Formant synthesis
Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using an acoustic model. Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components.
Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice.
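The acoustic model behind formant synthesis can be sketched as a classic source-filter chain: a glottal impulse train passed through cascaded second-order resonators tuned to formant frequencies. The sample rate, formant values, and bandwidths below are textbook-style assumptions for an /a/-like vowel, not parameters of any particular synthesizer.

```python
import math

# Sketch of formant synthesis: an impulse-train "glottal source" filtered
# by cascaded second-order resonators at assumed formant frequencies.

FS = 8000  # sample rate, Hz

def resonator(signal, freq, bw):
    # Standard digital resonator: y[n] = a*x[n] + b*y[n-1] + c*y[n-2]
    r = math.exp(-math.pi * bw / FS)
    b = 2 * r * math.cos(2 * math.pi * freq / FS)
    c = -r * r
    a = 1 - b - c
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = a * x + b * y1 + c * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def vowel(f0=120, dur=0.05, formants=((700, 130), (1220, 70), (2600, 160))):
    n = int(FS * dur)
    period = FS // f0
    sig = [1.0 if i % period == 0 else 0.0 for i in range(n)]  # impulse train
    for freq, bw in formants:
        sig = resonator(sig, freq, bw)
    return sig

samples = vowel()
print(len(samples), max(samples))
```

Varying the formant targets and fundamental frequency over time is what turns this static vowel generator into rule-based speech.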
Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in the early 1980s Sega arcade machines. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.
Articulatory synthesis
Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues.
Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model".
HMM-based synthesis

HMM-based synthesis is a synthesis method based on hidden Markov models. In this system, the frequency spectrum (vocal tract), fundamental frequency (vocal source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from HMMs themselves based on the maximum likelihood criterion.
Sinewave synthesis

Sinewave synthesis replicates speech with a small number of time-varying sinusoids that track the formant frequencies; the result is intelligible but distinctly unnatural-sounding, and the technique has been used mainly in speech perception research.
Text normalization challenges
The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project".
Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are not reliable, well understood, or computationally effective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence.
Recently TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases such as whether "read" should be pronounced as "red" implying past tense, or as "reed" implying present tense. Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to required training corpora is frequently difficult in these languages.
Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words, like "1325" becoming "one thousand three hundred twenty-five." However, numbers occur in many different contexts; when a year or perhaps a part of an address, "1325" should likely be read as "thirteen twenty-five", or, when part of a social security number, as "one three two five". A TTS system can often infer how to expand a number based on surrounding words, numbers, and punctuation, and sometimes the system provides a way to specify the context if it is ambiguous. Roman numerals can also be read differently depending on context. For example "Henry VIII" reads as "Henry the Eighth", while "Chapter VIII" reads as "Chapter Eight".
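The three readings of "1325" mentioned above can be produced by a small context-dependent expander. The vocabulary and function names are invented for the sketch, and only numbers below 10,000 are handled.

```python
# Context-sensitive number expansion: the same digit string is read
# differently as a cardinal, a year, or a digit sequence.

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def two_digit(n):
    if n < 20:
        return ONES[n]
    return TENS[n // 10] + ("" if n % 10 == 0 else "-" + ONES[n % 10])

def cardinal(n):
    parts = []
    if n >= 1000:
        parts.append(ONES[n // 1000] + " thousand")
        n %= 1000
    if n >= 100:
        parts.append(ONES[n // 100] + " hundred")
        n %= 100
    if n or not parts:
        parts.append(two_digit(n))
    return " ".join(parts)

def as_year(n):          # "1325" -> "thirteen twenty-five"
    return two_digit(n // 100) + " " + two_digit(n % 100)

def as_digits(n):        # "1325" -> "one three two five"
    return " ".join(ONES[int(d)] for d in str(n))

print(cardinal(1325))    # one thousand three hundred twenty-five
print(as_year(1325))     # thirteen twenty-five
print(as_digits(1325))   # one three two five
```

A real normalizer would choose among these readings from surrounding words and punctuation, which is exactly the inference problem the paragraph above describes.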
Similarly, abbreviations can be ambiguous. For example, the abbreviation "in" for "inches" must be differentiated from the word "in", and the address "12 St John St." uses the same abbreviation for both "Saint" and "Street". TTS systems with intelligent front ends can make educated guesses about ambiguous abbreviations, while others provide the same result in all cases, resulting in nonsensical (and sometimes comical) outputs.
Text-to-phoneme challenges
Speech synthesis systems use two basic approaches to determine the pronunciation of a word based on its spelling, a process which is often called text-to-phoneme or grapheme-to-phoneme conversion (phoneme is the term used by linguists to describe distinctive sounds in a language). The simplest approach to text-to-phoneme conversion is the dictionary-based approach, where a large dictionary containing all the words of a language and their correct pronunciations is stored by the program. Determining the correct pronunciation of each word is a matter of looking up each word in the dictionary and replacing the spelling with the pronunciation specified in the dictionary. The other approach is rule-based, in which pronunciation rules are applied to words to determine their pronunciations based on their spellings. This is similar to the "sounding out", or synthetic phonics, approach to learning reading.
Each approach has advantages and drawbacks. The dictionary-based approach is quick and accurate, but completely fails if it is given a word which is not in its dictionary. As dictionary size grows, so too does the memory space requirements of the synthesis system. On the other hand, the rule-based approach works on any input, but the complexity of the rules grows substantially as the system takes into account irregular spellings or pronunciations. (Consider that the word "of" is very common in English, yet is the only word in which the letter "f" is pronounced [v].) As a result, nearly all speech synthesis systems use a combination of these approaches.
Some languages, like Spanish, have a very regular writing system, and the prediction of the pronunciation of words based on their spellings is quite successful. Speech synthesis systems for such languages often use the rule-based method extensively, resorting to dictionaries only for those few words, like foreign names and borrowings, whose pronunciations are not obvious from their spellings. On the other hand, speech synthesis systems for languages like English, which have extremely irregular spelling systems, are more likely to rely on dictionaries, and to use rule-based methods only for unusual words, or words that aren't in their dictionaries.
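The hybrid approach described above can be sketched in a few lines. The exception dictionary, letter rules, and phoneme names below are toy examples invented for illustration; they do not reflect any real TTS system's lexicon or phone set:

```python
# Illustrative hybrid grapheme-to-phoneme conversion: consult a small
# exception dictionary first, then fall back to naive letter-to-sound
# rules. Real systems use far richer lexicons and context-sensitive rules.

EXCEPTIONS = {
    "of": ["AH", "V"],       # irregular: the "f" is pronounced [v]
    "one": ["W", "AH", "N"],
}

# Fallback rules: one phoneme per letter (a gross simplification).
LETTER_RULES = {
    "a": "AE", "b": "B", "c": "K", "d": "D", "e": "EH", "f": "F",
    "g": "G", "n": "N", "o": "AA", "t": "T",
}

def to_phonemes(word):
    word = word.lower()
    if word in EXCEPTIONS:                    # dictionary-based path
        return EXCEPTIONS[word]
    # rule-based path for everything else
    return [LETTER_RULES[ch] for ch in word if ch in LETTER_RULES]
```

The irregular word takes the dictionary path, while a regular word is sounded out by rule.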
Evaluation challenges
The consistent evaluation of speech synthesis systems may be difficult because of a lack of universally agreed objective evaluation criteria. Different organizations often use different speech data. The quality of speech synthesis systems also depends to a large degree on the quality of the production technique (which may involve analogue or digital recording) and on the facilities used to replay the speech. Evaluating speech synthesis systems has therefore often been compromised by differences between production techniques and replay facilities.
Recently, however, some researchers have started to evaluate speech synthesis systems using a common speech dataset.
Prosodics and emotional content
A study by Amy Drahota and colleagues at the University of Portsmouth, UK, reported in the journal "Speech Communication", found that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling. It was suggested that identification of the vocal features which signal emotional content may be used to help make synthesized speech sound more natural.
Dedicated hardware
- SC-01A (analog formant)
- SC-02 / SSI-263 / "Arctic 263"
- General Instruments SP0256-AL2 (CTS256A-AL2, MEA8000)
- Magnevation SpeakJet (www.speechchips.com TTS256)
- Savage Innovations SoundGin
- National Semiconductor DT1050 Digitalker (Mozer)
- Silicon Systems SSI 263 (analog formant)
- Texas Instruments
  - TMS5110A (LPC)
- Oki Semiconductor
  - MSM5218RS (ADPCM)
- Toshiba T6721A
- Philips PCF8200
Computer operating systems or outlets with speech synthesis
The first speech system integrated into an operating system was Apple Computer's MacinTalk in 1984. Since the 1980s, Macintosh computers have offered text-to-speech capabilities through the MacinTalk software. In the early 1990s Apple expanded its capabilities, offering system-wide text-to-speech support. With the introduction of faster PowerPC-based computers, Apple included higher-quality voice sampling. Apple also introduced speech recognition into its systems, which provided a fluid command set. More recently, Apple has added sample-based voices. Starting as a curiosity, the speech system of the Apple Macintosh has evolved into a fully supported program, PlainTalk, for people with vision problems. VoiceOver was included in Mac OS X Tiger and, more recently, Mac OS X Leopard. The voice shipped with Mac OS X 10.5 ("Leopard") is called "Alex" and features the taking of realistic-sounding breaths between sentences, as well as improved clarity at high read rates.
The second operating system with advanced speech synthesis capabilities was AmigaOS, introduced in 1985. The voice synthesis was licensed by Commodore International from a third-party software house (Don't Ask Software, now Softvoice, Inc.) and it featured a complete system of voice emulation, with both male and female voices and "stress" indicator markers, made possible by advanced features of the Amiga hardware audio chipset. It was divided into a narrator device and a translator library. Amiga Speak Handler featured a text-to-speech translator. AmigaOS considered speech synthesis a virtual hardware device, so the user could even redirect console output to it. Some Amiga programs, such as word processors, made extensive use of the speech system.
Microsoft Windows
Modern Windows systems use SAPI4- and SAPI5-based speech systems that include a speech recognition engine (SRE). SAPI 4.0 was available on Microsoft-based operating systems as a third-party add-on for systems like Windows 95 and Windows 98. Windows 2000 added a speech synthesis program called Narrator, directly available to users. All Windows-compatible programs could make use of speech synthesis features, available through menus once installed on the system. Microsoft Speech Server is a complete package for voice synthesis and recognition, for commercial applications such as call centers.
Currently, there are a number of applications, plugins and gadgets that can read messages directly from an e-mail client and web pages from a web browser. Some specialized software can narrate RSS feeds. On one hand, online RSS narrators simplify information delivery by allowing users to listen to their favourite news sources and to convert them to podcasts. On the other hand, online RSS readers are available on almost any PC connected to the Internet. Users can download generated audio files to portable devices, e.g. with the help of a podcast receiver, and listen to them while walking, jogging or commuting to work.
A growing field in internet based TTS is web-based assistive technology, e.g. 'Browsealoud' from a UK company. It can deliver TTS functionality to anyone (for reasons of accessibility, convenience, entertainment or information) with access to a web browser.
- Some models of Texas Instruments home computers produced in 1979 and 1981 (Texas Instruments TI-99/4 and TI-99/4A) were capable of text-to-phoneme synthesis or reciting complete words and phrases (text-to-dictionary), using a very popular Speech Synthesizer peripheral. TI used a proprietary codec to embed complete spoken phrases into applications, primarily video games.
- IBM's OS/2 Warp 4 included VoiceType, a precursor to IBM ViaVoice.
- Free and open source software systems, including GNU/Linux, offer various speech synthesizers. These include open-source programs such as the Festival Speech Synthesis System, which uses diphone-based synthesis (and can use a limited number of MBROLA voices), and gnuspeech from the Free Software Foundation, which uses articulatory synthesis. Other commercial vendor software also runs on GNU/Linux.
- Several commercial companies are also developing speech synthesis systems (this list is reporting them just for the sake of information, not endorsing any specific product): Voice on the Go, Acapela Group, AT&T, Cepstral, CereProc, DECtalk, IBM ViaVoice, IVONA TTS, Loquendo TTS, NeoSpeech TTS, Nuance Communications, Orpheus, SVOX, YAKiToMe! and Voxette.
- Companies which developed speech synthesis systems but which are no longer in this business include BeST Speech (bought by L&H), Eloquent Technology (bought by SpeechWorks), Lernout & Hauspie (bought by Nuance), SpeechWorks (bought by Nuance), Rhetorical Systems (bought by Nuance).
Speech synthesis markup languages
A number of markup languages have been established for the rendition of text as speech in an XML-compliant format. The most recent is Speech Synthesis Markup Language (SSML), which became a W3C recommendation in 2004. Older speech synthesis markup languages include Java Speech Markup Language (JSML) and SABLE. Although each of these was proposed as a standard, none of them has been widely adopted.
Speech synthesis markup languages are distinguished from dialogue markup languages. VoiceXML, for example, includes tags related to speech recognition, dialogue management and touchtone dialing, in addition to text-to-speech markup.
Speech synthesis has long been a vital assistive technology tool, and its application in this area is significant and widespread. It allows environmental barriers to be removed for people with a wide range of disabilities. The longest-standing application has been the use of screen readers for people with visual impairment, but text-to-speech systems are now commonly used by people with dyslexia and other reading difficulties, as well as by pre-literate youngsters. They are also frequently employed to aid those with severe speech impairment, usually through a dedicated voice output communication aid.
Sites such as Ananova have used speech synthesis to convert written news to audio content, which can be used for mobile applications.
Speech synthesis techniques are also used in entertainment productions such as games and anime. In 2007, Animo Limited announced the development of a software application package based on its speech synthesis software FineSpeech, explicitly geared towards customers in the entertainment industries, able to generate narration and lines of dialogue according to user specifications. The application reached maturity in 2008, when NEC Biglobe announced a web service that allows users to create phrases from the voices of Code Geass: Lelouch of the Rebellion R2 characters.
The TTS application Speakonia is often used to add synthetic voices to YouTube videos for comedic effect, as in "Secret Missing Episode" videos.
Software such as Vocaloid can generate singing voices via lyrics and melody. This is also the aim of the Singing Computer project (which uses the GPL software Lilypond and Festival) to help blind people check their lyric input.
See also
- ↑ Jonathan Allen, M. Sharon Hunnicutt, Dennis Klatt, From Text to Speech: The MITalk system. Cambridge University Press: 1987. ISBN 0521306418
- ↑ Rubin, P., Baer, T., & Mermelstein, P. (1981). An articulatory synthesizer for perceptual research. Journal of the Acoustical Society of America, 70, 321-328.
- ↑ P. H. Van Santen, Richard William Sproat, Joseph P. Olive, and Julia Hirschberg, Progress in Speech Synthesis. Springer: 1997. ISBN 0387947019
- ↑ History and Development of Speech Synthesis, Helsinki University of Technology, Retrieved on November 4, 2006
- ↑ Mechanismus der menschlichen Sprache nebst der Beschreibung seiner sprechenden Maschine ("Mechanism of the human speech with description of its speaking machine," J. B. Degen, Wien).
- ↑ Mattingly, Ignatius G. Speech synthesis for phonetic and phonological models. In Thomas A. Sebeok (Ed.), Current Trends in Linguistics, Volume 12, Mouton, The Hague, pp. 2451-2487, 1974.
- ↑ http://query.nytimes.com/search/query?ppds=per&v1=GERSTMAN%2C%20LOUIS&sort=newest NY Times obituary for Louis Gerstman.
- ↑ Arthur C. Clarke online Biography
- ↑ Bell Labs: Where "HAL" First Spoke (Bell Labs Speech Synthesis website)
- ↑ Anthropomorphic Talking Robot Waseda-Talker Series
- ↑ Alan W. Black, Perfect synthesis for all of the people all of the time. IEEE TTS Workshop 2002. (http://www.cs.cmu.edu/~awb/papers/IEEE2002/allthetime/allthetime.html)
- ↑ John Kominek and Alan W. Black. (2003). CMU ARCTIC databases for speech synthesis. CMU-LTI-03-177. Language Technologies Institute, School of Computer Science, Carnegie Mellon University.
- ↑ Julia Zhang. Language Generation and Speech Synthesis in Dialogues for Language Learning, masters thesis, http://groups.csail.mit.edu/sls/publications/2004/zhang_thesis.pdf Section 5.6 on page 54.
- ↑ PSOLA Synthesis
- ↑ T. Dutoit, V. Pagel, N. Pierret, F. Bataiile, O. van der Vrecken. The MBROLA Project: Towards a set of high quality speech synthesizers of use for non commercial purposes. ICSLP Proceedings, 1996.
- ↑ L.F. Lamel, J.L. Gauvain, B. Prouts, C. Bouhier, R. Boesch. Generation and Synthesis of Broadcast Messages, Proceedings ESCA-NATO Workshop and Applications of Speech Technology, September 1993.
- ↑ Examples include Astro Blaster, Space Fury, and Star Trek: Strategic Operations Simulator.
- ↑ John Holmes and Wendy Holmes. Speech Synthesis and Recognition, 2nd Edition. CRC: 2001. ISBN 0748408568.
- ↑ The HMM-based Speech Synthesis System, http://hts.sp.nitech.ac.jp/
- ↑ Remez, R.E., Rubin, P.E., Pisoni, D.B., & Carrell, T.D. Speech perception without traditional speech cues. Science, 1981, 212, 947-950.
- ↑ Blizzard Challenge http://festvox.org/blizzard
- ↑ The Sound of Smiling
- ↑ Miner, Jay et al. (1991). Amiga Hardware Reference Manual: Third Edition. Addison-Wesley Publishing Company, Inc. ISBN 0-201-56776-8.
- ↑ Smithsonian Speech Synthesis History Project (SSSHP) 1986-2002
- ↑ gnuspeech
- ↑ Speech Synthesis Software for Anime Announced
- ↑ Code Geass Speech Synthesizer Service Offered in Japan
- ↑ Free(b)soft - Singing Computer
This page uses Creative Commons Licensed content from Wikipedia (view authors).
In computer science, reference counting is a technique of storing the number of references, pointers, or handles to a resource such as an object, block of memory, disk space or other resource. It may also refer, more specifically, to a garbage collection algorithm that uses these reference counts to deallocate objects which are no longer referenced.
Use in garbage collection
As a garbage collection algorithm, reference counting tracks, for each object, a count of the number of references to it held by other objects. If an object's reference count reaches zero, the object has become inaccessible, and can be destroyed.
When an object is destroyed, any objects referenced by that object also have their reference counts decreased. Because of this, removing a single reference can potentially lead to a large number of objects being freed. A common modification allows reference counting to be made incremental: instead of destroying an object as soon as its reference count becomes zero, it is added to a list of unreferenced objects, and periodically (or as needed) one or more items from this list are destroyed.
Simple reference counts require frequent updates. Whenever a reference is destroyed or overwritten, the reference count of the object it references is decremented, and whenever one is created or copied, the reference count of the object it references is incremented.
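The basic bookkeeping can be sketched as follows. This is an illustrative Python model, not any particular runtime's implementation: dropping the last reference to an object frees it and cascades to the objects it references.

```python
# Naive reference counting: each object tracks how many references
# point to it; reaching zero frees the object and recursively releases
# its outgoing references.

class Obj:
    def __init__(self, name):
        self.name = name
        self.rc = 0
        self.refs = []        # outgoing references held by this object
        self.freed = False

def incref(o):
    o.rc += 1

def decref(o):
    o.rc -= 1
    if o.rc == 0:             # unreachable: free, then release children
        o.freed = True
        for child in o.refs:
            decref(child)

# a references b; dropping the only reference to a frees b as well
a, b = Obj("a"), Obj("b")
incref(a)                     # a is held by a local variable
a.refs.append(b); incref(b)   # a holds a reference to b
decref(a)                     # the single decrement cascades to b
```

This shows the incremental character of reclamation: one dropped reference can free a whole chain of objects.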
Reference counting is also used in disk operating systems and distributed systems, where full non-incremental tracing garbage collection is too time consuming because of the size of the object graph and slow access speed.
Advantages and disadvantages
The main advantage of reference counting over tracing garbage collection is that objects are reclaimed as soon as they can no longer be referenced, and in an incremental fashion, without long pauses for collection cycles and with clearly defined lifetime of every object. In real-time applications or systems with limited memory, this is important to maintain responsiveness. Reference counting is also among the simplest forms of garbage collection to implement. It also allows for effective management of non-memory resources such as operating system objects, which are often much scarcer than memory (tracing GC systems use finalizers for this, but the delayed reclamation may cause problems). Weighted reference counts are a good solution for garbage collecting a distributed system.
Tracing garbage collection cycles are triggered too often if the set of live objects fills most of the available memory; tracing collection requires extra space to be efficient. Reference counting performance does not deteriorate as the total amount of free space decreases.
Reference counts are also useful information to use as input to other runtime optimizations. For example, systems that depend heavily on immutable objects, such as many functional programming languages, can suffer an efficiency penalty due to frequent copies. However, if we know an object has only one reference (as most do in many systems), and that reference is lost at the same time that a similar new object is created (as in the string append statement str ← str + "a"), we can replace the operation with a mutation on the original object.
Reference counting in naive form has two main disadvantages over the tracing garbage collection, both of which require additional mechanisms to ameliorate:
- The frequent updates it involves are a source of inefficiency. While tracing garbage collectors can impact efficiency severely via context switching and cache line faults, they collect relatively infrequently, while accessing objects is done continually. Also, less importantly, reference counting requires every memory-managed object to reserve space for a reference count. In tracing garbage collectors, this information is stored implicitly in the references that refer to that object, saving space, although tracing garbage collectors, particularly incremental ones, can require additional space for other purposes.
- The naive algorithm described above can't handle reference cycles, an object which refers directly or indirectly to itself. A mechanism relying purely on reference counts will never consider cyclic chains of objects for deletion, since their reference count is guaranteed to stay nonzero. Methods for dealing with this issue exist but can also increase the overhead and complexity of reference counting — on the other hand, these methods need only be applied to data that might form cycles, often a small subset of all data. One such method is the use of weak references.
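A minimal illustration of the cycle problem, using the same naive scheme as above: once two objects reference each other, their counts never reach zero even after all outside references are gone.

```python
# Why naive counts leak cycles: two mutually referencing objects keep
# each other's count at one after every external reference is dropped,
# so neither is ever freed.

class Obj:
    def __init__(self):
        self.rc = 0
        self.refs = []
        self.freed = False

def incref(o):
    o.rc += 1

def decref(o):
    o.rc -= 1
    if o.rc == 0:
        o.freed = True
        for child in o.refs:
            decref(child)

x, y = Obj(), Obj()
incref(x); incref(y)            # each held by a local variable
x.refs.append(y); incref(y)     # x -> y
y.refs.append(x); incref(x)     # y -> x  (reference cycle)
decref(x); decref(y)            # drop both local references
# Both counts are still 1: the cycle is unreachable garbage that the
# naive algorithm will never reclaim.
```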
When dealing with garbage collection schemes, it's often helpful to think of the reference graph, which is a directed graph where the vertices are objects and there is an edge from an object A to an object B if A holds a reference to B. We also have a special vertex or vertices representing the local variables and references held by the runtime system, and no edges ever go to these nodes, although edges can go from them to other nodes.
In this context, the simple reference count of an object is the in-degree of its vertex. Deleting a vertex is like collecting an object. It can only be done when the vertex has no incoming edges, so it does not affect the out-degree of any other vertices, but it can affect the in-degree of other vertices, causing their corresponding objects to be collected as well if their in-degree also becomes 0 as a result.
The connected component containing the special vertex contains the objects that can't be collected, while other connected components of the graph only contain garbage. By the nature of reference counting, each of these garbage components must contain at least one cycle.
Dealing with inefficiency of updates
Incrementing and decrementing reference counts every time a reference is created or destroyed can significantly impede performance. Not only do the operations take time, but they damage cache performance and can lead to pipeline bubbles. Even read-only operations like calculating the length of a list require a large number of reads and writes for reference updates with naive reference counting.
One simple technique is for the compiler to combine a number of nearby reference updates into one. This is especially effective for references which are created and quickly destroyed. Care must be taken, however, to put the combined update at the right position so that a premature free is avoided.
The Deutsch-Bobrow method of reference counting capitalizes on the fact that most reference count updates are in fact generated by references stored in local variables. It ignores these references, only counting references in data structures, but before an object with reference count zero can be deleted, the system must verify with a scan of the stack and registers that no other reference to it still exists.
Another technique devised by Henry Baker involves deferred increments, in which references which are stored in local variables do not immediately increment the corresponding reference count, but instead defer this until it is necessary. If such a reference is destroyed quickly, then there is no need to update the counter. This eliminates a large number of updates associated with short-lived references. However, if such a reference is copied into a data structure, then the deferred increment must be performed at that time. It is also critical to perform the deferred increment before the object's count drops to zero, which would otherwise result in a premature free.
A dramatic decrease in the overhead on counter updates was obtained by Levanoni and Petrank. They introduce the update coalescing method which coalesces many of the redundant reference count updates. Consider a pointer that in a given interval of the execution is updated several times. It first points to an object O1, then to an object O2, and so forth until at the end of the interval it points to some object On. A reference counting algorithm would typically execute rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++, rc(O3)--, ..., rc(On)++. But most of these updates are redundant. In order to have the reference count properly evaluated at the end of the interval it is enough to perform rc(O1)-- and rc(On)++. The rest of the updates are redundant.
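The coalescing idea can be sketched as follows. This is an illustrative model, not the actual Levanoni–Petrank collector: log only the first (old) value of each updated slot during an interval, and settle the counts once at the end.

```python
# Update coalescing sketch: a slot is overwritten several times during
# an interval, but only its value at the start and at the end matter.
# Record the first old value per slot; at interval end, perform one
# decrement (on the recorded value) and one increment (on the current
# value) instead of a pair of updates per store.

rc = {"O1": 1, "O2": 0, "O3": 0}   # reference counts
slot = "O1"                        # the pointer slot being mutated
log = {}                           # slot id -> value at interval start

def store(slot_id, old, new):
    global slot
    log.setdefault(slot_id, old)   # remember only the first old value
    slot = new                     # no count updates during the interval

store("p", slot, "O2")             # O1 -> O2
store("p", slot, "O3")             # O2 -> O3 (intermediate, never counted)

# End of interval: one decrement and one increment per logged slot.
for slot_id, first in log.items():
    rc[first] -= 1
rc[slot] += 1
```

The intermediate object O2 never has its count touched, yet the final counts are exactly what the naive rc(O1)--, rc(O2)++, rc(O2)--, rc(O3)++ sequence would produce.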
Levanoni and Petrank show how to use such update coalescing in a reference counting collector. It turns out that when using update coalescing with an appropriate treatment of new objects, more than 99% of the counter updates are eliminated for typical Java benchmarks. In addition, the need for atomic operations during pointer updates on parallel processors is eliminated. Finally, they present an enhanced algorithm that may run concurrently with multithreaded applications employing only fine synchronization. The details appear in the paper.
Blackburn and McKinley's ulterior reference counting combines deferred reference counting with a copying nursery, observing that the majority of pointer mutations occur in young objects. This algorithm achieves throughput comparable with the fastest generational copying collectors with the low bounded pause times of reference counting.
More work on improving the performance of reference counting collectors can be found in Paz's Ph.D. thesis. In particular, he advocates the use of age-oriented collectors and prefetching.
Dealing with reference cycles
There are a variety of ways of handling the problem of detecting and collecting reference cycles. One is that a system may explicitly forbid reference cycles. In some systems like filesystems this is a common solution. Another example is the Cocoa framework, which recommends avoiding reference cycles by using "strong" (counted) references for "parent-to-child" references, and "weak" (non-counted) references for "child-to-parent" references. Cycles are also sometimes ignored in systems with short lives and a small amount of cyclic garbage, particularly when the system was developed using a methodology of avoiding cyclic data structures wherever possible, typically at the expense of efficiency.
Another solution is to periodically use a tracing garbage collector to reclaim cycles. Since cycles typically constitute a relatively small amount of reclaimed space, the collection cycles can be spaced much farther apart than with an ordinary tracing garbage collector.
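A backup tracing pass can be sketched as a simple mark-and-sweep over the reference graph. This is an illustrative model (not Bacon's trial-deletion algorithm): anything unreachable from the roots is garbage, whatever its reference count says.

```python
# Backup tracing collector sketch: mark every object reachable from
# the roots, then sweep everything unmarked — which reclaims cycles
# that reference counting alone cannot.

class Obj:
    def __init__(self):
        self.refs = []            # outgoing edges in the reference graph

def collect_cycles(roots, all_objects):
    marked = set()
    stack = list(roots)
    while stack:                  # depth-first mark phase
        o = stack.pop()
        if id(o) not in marked:
            marked.add(id(o))
            stack.extend(o.refs)
    # sweep phase: objects unreachable from the roots are garbage
    return [o for o in all_objects if id(o) not in marked]

root, a, b = Obj(), Obj(), Obj()
a.refs.append(b)
b.refs.append(a)                  # a <-> b is an unreachable cycle
garbage = collect_cycles([root], [root, a, b])
```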
Bacon describes a cycle-collection algorithm for reference counting systems with some similarities to tracing systems, including the same theoretical time bounds, but that takes advantage of reference count information to run much more quickly and with less cache damage. It's based on the observation that an object cannot appear in a cycle until its reference count is decremented to a nonzero value. All objects which this occurs to are put on a roots list, and then periodically the program searches through the objects reachable from the roots for cycles. It knows it has found a cycle when decrementing all the reference counts on a cycle of references brings them all down to zero. An enhanced version of this algorithm by Paz et al. is able to run concurrently with other operations and improve its efficiency by using the update coalescing method of Levanoni and Petrank.
Variants of reference counting
Although it's possible to augment simple reference counts in a variety of ways, often a better solution can be found by performing reference counting in a fundamentally different way. Here we describe some of the variants on reference counting and their benefits and drawbacks.
Weighted reference counting
In weighted reference counting, we assign each reference a weight, and each object tracks not the number of references referring to it, but the total weight of the references referring to it. The initial reference to a newly-created object has a large weight, such as 2^16. Whenever this reference is copied, half of the weight goes to the new reference, and half of the weight stays with the old reference. Because the total weight does not change, the object's reference count does not need to be updated.
Destroying a reference decrements the total weight by the weight of that reference. When the total weight becomes zero, all references have been destroyed. If an attempt is made to copy a reference with a weight of 1, we have to "get more weight" by adding to the total weight and then adding this new weight to our reference, and then split it.
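A minimal sketch of the scheme (illustrative Python; the class and function names are invented): copying a reference splits its weight locally, so only creation and destruction ever touch the shared total.

```python
# Weighted reference counting sketch: the object tracks total weight;
# each reference carries its own weight. Copying halves the copier's
# weight without accessing the object — useful when the object lives
# in another process or across a network.

class Shared:
    def __init__(self):
        self.total = 0            # total weight of all live references

class Ref:
    def __init__(self, obj, weight):
        self.obj, self.weight = obj, weight

def create(obj, initial=1 << 16):
    obj.total += initial          # touched once, at creation
    return Ref(obj, initial)

def copy(ref):                    # no access to obj.total needed
    half = ref.weight // 2
    ref.weight -= half
    return Ref(ref.obj, half)

def destroy(ref):
    ref.obj.total -= ref.weight   # touched once per destroyed reference
    return ref.obj.total == 0     # True when the last reference dies

o = Shared()
r1 = create(o)                    # total = 65536
r2 = copy(r1)                     # total unchanged; weights 32768 each
dead_after_r1 = destroy(r1)       # total drops to r2's weight
dead_after_r2 = destroy(r2)       # total reaches zero: reclaimable
```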
The property of not needing to access a reference count when a reference is copied is particularly helpful when the object's reference count is expensive to access, for example because it is in another process, on disk, or even across a network. It can also help increase concurrency by avoiding many threads locking a reference count to increase it. Thus, weighted reference counting is most useful in parallel, multiprocess, database, or distributed applications.
The primary problem with simple weighted reference counting is that destroying a reference still requires accessing the reference count, and if many references are destroyed this can cause the same bottlenecks we seek to avoid. Some adaptations of weighted reference counting seek to avoid this by attempting to give weight back from a dying reference to one which is still active.
Weighted reference counting was independently devised by Bevan, in the paper Distributed garbage collection using reference counting, and Watson, in the paper An efficient garbage collection scheme for parallel computer architectures, both in 1987.
Indirect reference counting
In indirect reference counting, it is necessary to keep track of whom the reference was obtained from. This means that two references are kept to the object: a direct one which is used for invocations; and an indirect one which forms part of a diffusion tree, such as in the Dijkstra-Scholten algorithm, which allows a garbage collector to identify dead objects. This approach prevents an object from being discarded prematurely.
Examples of use
Microsoft's Component Object Model (COM) makes pervasive use of reference counting. In fact, the three methods that all COM objects must provide (in the IUnknown interface) all increment or decrement the reference count. Much of the Windows Shell and many Windows applications (including MS Internet Explorer, MS Office, and countless third-party products) are built on COM, demonstrating the viability of reference counting in large-scale systems.
One primary motivation for reference counting in COM is to enable interoperability across different programming languages and runtime systems. A client need only know how to invoke object methods in order to manage object life cycle; thus, the client is completely abstracted from whatever memory allocator the implementation of the COM object uses. As a typical example, a Visual Basic program using a COM object is agnostic towards whether that object was allocated (and must later be deallocated) by a C++ allocator or another Visual Basic component.
However, this support for heterogeneity has a major cost: it requires correct reference count management by all parties involved. While high-level languages like Visual Basic manage reference counts automatically, C/C++ programmers are entrusted to increment and decrement reference counts at the appropriate time. C++ programs can and should avoid the task of managing reference counts manually by using smart pointers. Bugs caused by incorrect reference counting in COM systems are notoriously hard to resolve, especially because the error may occur in an opaque, third-party component.
Microsoft has abandoned reference counting in favor of tracing garbage collection for the .NET Framework.
C++11 provides reference-counted smart pointers via the std::shared_ptr class; programmers can use weak pointers (via std::weak_ptr) to break cycles. C++ does not require all objects to be reference counted; in fact, programmers can choose to apply reference counting only to those objects that are truly shared. Objects not intended to be shared can be referenced using a std::unique_ptr, and objects that are shared but not owned can be accessed via an iterator.
In addition, C++11's move semantics further reduce the extent to which reference counts need to be modified.
Apple's Cocoa framework (and related frameworks, such as Core Foundation) use manual reference counting, much like COM. However, as of Mac OS X v10.5, Cocoa when used with Objective-C 2.0 also has automatic garbage collection. Apple's Cocoa Touch framework, used on its iOS devices, also uses manual reference counting, and does not support automatic garbage collection, though Automatic Reference Counting was added in iOS 5 and Mac OS X 10.7. As of OS X 10.8, garbage collection has been deprecated in favour of automatic/manual reference counting.
One language that uses reference counting for garbage collection is Delphi. Delphi is not a completely garbage collected language, in that user-defined types must still be manually allocated and deallocated. It does provide automatic collection, however, for a few built-in types, such as strings, dynamic arrays, and interfaces, for ease of use and to simplify the generic database functionality. It is up to the programmer to decide whether to use the built-in types or not; Delphi programmers have complete access to low-level memory management like in C/C++. So all potential cost of Delphi's reference counting can, if desired, be easily circumvented.
Some of the reasons reference counting may have been preferred to other forms of garbage collection in Delphi include:
- The general benefits of reference counting, such as prompt collection.
- Cycles either cannot occur or do not occur in practice because all of the small set of garbage-collected built-in types are not arbitrarily nestable.
- The overhead in code size required for reference counting is very small (on native x86, typically a single LOCK INC, LOCK DEC or LOCK XADD instruction, which ensures atomicity in any environment), and no separate thread of control is needed for collection as would be needed for a tracing garbage collector.
- Many instances of the most commonly used garbage-collected type, the string, have a short lifetime, since they are typically intermediate values in string manipulation.
- The reference count of a string is checked before mutating a string. This allows reference count 1 strings to be mutated directly whilst higher reference count strings are copied before mutation. This allows the general behaviour of old style pascal strings to be preserved whilst eliminating the cost of copying the string on every assignment.
- Because garbage collection is only done on built-in types, reference counting can be efficiently integrated into the library routines used to manipulate each datatype, keeping the overhead needed for updating of reference counts low. Moreover, much of the runtime library is in hand-optimized assembler.
The GObject object-oriented programming framework implements reference counting on its base types, including weak references. Reference incrementing and decrementing uses atomic operations for thread safety. A significant amount of the work in writing bindings to GObject from high-level languages lies in adapting GObject reference counting to work with the language's own memory management system.
PHP uses a reference counting mechanism for its internal variable management. Since PHP 5.3, it implements the algorithm from Bacon's above-mentioned paper. PHP allows the cycle collector to be turned on and off with user-level functions, and also allows the purging mechanism to be forced to run manually.
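Python's `gc` module offers the same user-level controls that PHP exposes through `gc_enable()`, `gc_disable()` and `gc_collect_cycles()`: cycle collection can be switched off and a purge can be forced manually. A sketch, using Python as a stand-in:

```python
import gc

gc.disable()          # switch the cycle collector off; plain refcounting still runs

a, b = [], []
a.append(b)           # a -> b
b.append(a)           # b -> a: a reference cycle
del a, b              # each count stays at 1, so refcounting alone leaks the pair

found = gc.collect()  # force a purge, like PHP's gc_collect_cycles()
print(found)          # at least the two cycle members are found unreachable
assert found >= 2

gc.enable()           # restore normal automatic collection
```

Disabling the collector while refcounting keeps running is exactly the trade-off described above: prompt reclamation of acyclic garbage, with cycles deferred until an explicit purge.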
Perl also uses reference counting, without any special handling of circular references, although (as in Cocoa and C++ above) Perl does support weak references, which allow programmers to avoid creating a cycle.
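The same cycle-avoidance idiom can be shown with Python's `weakref` module standing in for Perl's `Scalar::Util::weaken`: a child points back at its parent through a weak reference, so no cycle is ever formed.

```python
import weakref

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = None     # will hold a weak reference, never a strong one
        self.children = []

root = Node("root")
child = Node("child")
root.children.append(child)        # strong link: the parent keeps children alive
child.parent = weakref.ref(root)   # weak back-link: no cycle is formed

assert child.parent().name == "root"   # dereference by calling the weakref
del root                               # the only strong reference is dropped...
assert child.parent() is None          # ...so the parent was freed immediately
```

The weak back-link does not contribute to the reference count, so dropping the last strong reference reclaims the parent promptly instead of leaking a parent-child cycle.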
Squirrel also uses reference counting and offers cycle detection as well. This tiny language is relatively unknown outside the video game industry; however, it is a concrete example of how reference counting can be practical and efficient (especially in realtime environments).
Tcl 8 uses reference counting for memory management of values (Tcl_Obj structs). Since Tcl's values are immutable, reference cycles are impossible to form and no cycle-detection scheme is needed. Operations that would replace a value with a modified copy are generally optimized to instead modify the original when its reference count indicates it to be unshared. Because references are counted at the data-structure level, the problems with very frequent updates discussed above do not arise.
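The unshared-value optimization Tcl applies can be sketched with a toy reference-counted value; the `Value` class and its method names below are illustrative only, not Tcl's actual Tcl_Obj API:

```python
class Value:
    """Toy reference-counted value with Tcl-style copy-on-write behaviour."""

    def __init__(self, items):
        self.items = list(items)
        self.refcount = 1

    def incr_ref(self):
        self.refcount += 1
        return self

    def append(self, x):
        """Return a Value with x appended, reusing self when it is unshared."""
        if self.refcount == 1:       # unshared: safe to modify in place
            self.items.append(x)
            return self
        self.refcount -= 1           # shared: leave the original untouched
        fresh = Value(self.items)    # ...and mutate a private copy instead
        fresh.items.append(x)
        return fresh

v = Value([1, 2])
w = v.incr_ref()              # v and w now share one representation
w2 = w.append(3)              # shared, so a copy is made
assert v.items == [1, 2] and w2.items == [1, 2, 3] and w2 is not v

u = Value([1, 2]).append(3)   # unshared, so it is modified in place
assert u.items == [1, 2, 3]
```

This is the same check Delphi performs on strings: a count of 1 proves exclusive ownership, so in-place mutation is observationally identical to copy-then-mutate but avoids the copy.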
Disk operating systems
Many disk operating systems maintain a count of the number of references to any particular block or file. When the count falls to zero, the file can be safely deallocated. Although references are normally made from directories, some Unix systems allow a file to be referenced solely by live processes, so there can be open files that no longer exist anywhere in the file-system hierarchy.
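On a POSIX-style filesystem this count is the inode's link count, visible as `st_nlink`. The sketch below (assuming a filesystem with hard-link support, as on any Unix) shows a block of data surviving the removal of its original name because a second reference remains:

```python
import os
import tempfile

# Each directory entry is one reference to the file's data; the inode's
# link count (st_nlink) is the reference count the OS maintains.
with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "data.txt")
    alias = os.path.join(d, "alias.txt")

    with open(original, "w") as f:
        f.write("payload")

    n1 = os.stat(original).st_nlink   # 1: a single directory entry
    os.link(original, alias)          # add a second hard link
    n2 = os.stat(original).st_nlink   # 2
    os.unlink(original)               # drop one reference; data must survive
    n3 = os.stat(alias).st_nlink      # back to 1
    with open(alias) as f:
        content = f.read()

print(n1, n2, n3)                     # 1 2 1
assert (n1, n2, n3) == (1, 2, 1) and content == "payload"
```

Only when the last link is removed, and no process still holds the file open, does the count reach zero and the blocks get deallocated.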
- Wilson, Paul R. "Uniprocessor Garbage Collection Techniques". Proceedings of the International Workshop on Memory Management. London, UK: Springer-Verlag. pp. 1–42. ISBN 3-540-55940-X. Retrieved 5 December 2009. Section 2.1.
- Henry Baker (September 1994). "Minimizing Reference Count Updating with Deferred and Anchored Pointers for Functional Data Structures". ACM SIGPLAN Notices 29 (9): 38–43. doi:10.1145/185009.185016.
- Yossi Levanoni, Erez Petrank (2001). "An on-the-fly reference-counting garbage collector for Java". Proceedings of the 16th ACM SIGPLAN conference on Object-oriented programming, systems, languages, and applications. OOPSLA 2001. pp. 367–380. doi:10.1145/504282.504309.
- Yossi Levanoni, Erez Petrank (2006). "An on-the-fly reference-counting garbage collector for Java". ACM Transactions on Programming Languages and Systems 28: 31–69. doi:10.1145/1111596.1111597.
- Stephen Blackburn, Kathryn McKinley (2003). "Ulterior Reference Counting: Fast Garbage Collection without a Long Wait". Proceedings of the 18th annual ACM SIGPLAN conference on Object-oriented programing, systems, languages, and applications. OOPSLA 2003. pp. 344–358. doi:10.1145/949305.949336. ISBN 1-58113-712-5.
- Harel Paz, David F. Bacon, Elliot K. Kolodner, Erez Petrank, V. T. Rajan (2007). "An efficient on-the-fly cycle collection". ACM Transactions on Programming Languages and Systems (TOPLAS). doi:10.1145/1255450.1255453.
- The Memory Manager Reference: Beginner's Guide: Recycling: Reference Counts
- Minimizing Reference Count Updating with Deferred and Anchored Pointers for Functional Data Structures, Henry G. Baker
- Concurrent Cycle Collection in Reference Counted Systems, David F. Bacon
- An On-the-Fly Reference-Counting Garbage Collector for Java, Yossi Levanoni and Erez Petrank
- Atomic Reference Counting Pointers: A lock-free, async-free, thread-safe, multiprocessor-safe reference counting pointer, Kirk Reinholtz
- Extending and Embedding the Python Interpreter: Extending Python with C or C++: Reference Counts, Guido van Rossum
- Down for the count? Getting reference counting back in the ring, Rifat Shahriyar, Stephen M. Blackburn and Daniel Frampton
(background of 13th Century polychrome pottery patterns from Four Mile dig in east-central Arizona, drawn by Mac Schweitzer.)
Arizona Indians Survived
the Loss of Their World
Arizona’s indigenous peoples left no written archive for historians. Most of what we know about the Native American experience has been documented by Anglos, making tribal history an exercise in western European thinking. It is therefore insightful to relate American Indian history from a postcolonial, multicultural perspective. In any case, the profound conflict between Native and European values made Indian life extraordinarily difficult over the years. When Europeans arrived in the southwest, the people already living there would have been happy to share some tobacco over a campfire. Instead, they were surprised and dismayed to learn that these strangers shared nothing.
This encounter happened at the height of the Renaissance and Reformation in Europe. It was not an age of tolerance. And Spain in particular was still acting out the horrors of the Inquisition. European civilization was structured along class lines, with an aristocracy and monarchy that demanded complete fealty. Science and the Reformation challenged the Catholic Church. Merchants and new technologies offered an alternative to the accumulation of wealth via Feudalism. It was an age of conflict. And it had become accepted in the eastern hemisphere that the strong should dominate the weak.
Europeans enforced real property ownership on Indians who saw no property lines on the ground and had difficulty understanding the concept of possession of dirt by absentee owners. Conquistadors demanded obeisance by conquered tribes to a king Indians would never meet. Padres tolerated no unorthodox rituals or beliefs, destroyed idols and required families to perform forced labor that benefited the aristocracy and the religious order. Conscientious objectors were subjected to torture, hanging, garroting, beheading, dismemberment or amputation. It must have been obvious early on that Europeans considered Indians an inferior conquered population. And yet Indians were shy but proud, placing great importance on personal respect. The clashing of conflicting beliefs initiated several centuries of Indian rebellions and European punitive expeditions.
The Apache practice of killing whole families or taking women and children captive spread fear and inflamed Anglo resentment. With great difficulty, the US military disarmed and confined each renegade band, culminating in the surrender of Geronimo’s Chiricahuas in 1886. Just before the official surrender, Tombstone photographer C. S. Fly set up a series of poses like this one of Geronimo and his still-armed families. Six months earlier in New Mexico, 11-year-old Santiago “Jimmy” McKinn rode off with the Indians as they killed his older brother. The two boys were away from home caring for horses and the Apaches needed more horses. After the surrender, Santiago, who already spoke English and Spanish, was returned to his father fluent in Apache. Geronimo and his men, women and children were sent east for 27 years of confinement. Santiago grew up to become a blacksmith in Silver City, New Mexico. Later he moved with his wife and children to Phoenix where he died in the 1950s. The boy with bow and arrow has been identified as Garditha, a 10-year-old orphan who became the uncle of the influential Fort Sill leader Robert Gooday. There is an accompanying photo showing Garditha and the other boys brandishing rifles alongside the men. It is believed he died during confinement in Florida or Alabama. The man in a white shirt, sitting at right, may be the 20-year-old Zhonne, brother-in-law of the leader Natchez (Naiche). In several of the photos he is holding his baby. Zhonne was sent to Carlisle Indian School in Pennsylvania for rehabilitation where he is later pictured with short hair in a stylish suit. As Calvin Zhonne, he was eventually reunited with his wife and children and they went in 1913, the year imprisonment of the Chiricahuas ended, to live on the Mescalero reservation in New Mexico.
The practice of pagan rituals was proof to the white man of the need for change in Indian behavior. Fervent and persistent missionary efforts were introduced on every reservation to persuade natives to turn away from the devil. It was an exercise in liberal idealism, a belief in repentance and rehabilitation. As late as the 1950s, when Mike Roberts issued this postcard, many were still calling this ceremony the “Apache devil dance.” The White Mountain Apaches call the men Gann dancers or crown dancers. Representing the life within the living mountains, the Gann appear at many ceremonies. They are seen here as part of the Sunrise Ceremony, a puberty rite for girls. While many Apaches have become devoted Christians and church members, traditional practices continue.
Adventurers from the United States eventually took possession of the southwest by military force and imposed their own liberal democratic ideals. The land, its animals, minerals, plants and people, was an economic resource that must be used to further the progress of civilization. Native peoples became wards of the state, confined to government reserves for their salvation and education. A debate ensued between those who advocated extirpation and those who argued for assimilation. Democracy demanded that any resolution of that debate remain tentative. Those who rejected liberal ideals supported violent solutions like the infamous Bascom Affair and the Camp Grant Massacre.
San Carlos Indian Agent John P. Clum summed up the military war against the Apaches from 1862 to 1871 as more than $38 million spent to kill less than 100 Indians, including old men, women and children, at the cost of the lives of more than 1,000 Anglo soldiers and civilians. (p. 67, Baldwin, The Apache Indians, 1978) Then in 1871, about 125 Aravaipa Apaches, mostly women and children, were massacred by Tucson vigilantes while living under the supervision of the military at Camp Grant. It has been estimated that several hundred Yavapai Indians, mistaken for Apaches, were killed between 1864 and 1876, including at least 75 slaughtered at Skeleton Cave in 1872. Unfortunately, the expense and killing would continue until at least 1886. And the economy of Arizona Territory suffered. Finally, with the appointment of General Crook to deal with the Apache problem, the great Arizona copper magnate James Douglas noted that resolution was finally at hand. “The true remedy lies in supplying the Indian with the means of supporting himself and training him to live side by side in healthy rivalry with the white man, not in fostering race distinction by isolating the Indian on his reservation and excluding the white man from mines that the Indian cannot work and pastures that he cannot occupy.” (pp. 78-79, H. H. Langton, James Douglas A Memoir, 1940)
In a harsh environment, hunter-gatherer tribes had to be constantly on the move, often taking from others what they needed to survive. For many years after they were confined on reservations, learning to farm for a living, they were dependent upon food bank handouts from the federal government. This detail from a stereoscopic card by Rothrock of Phoenix shows “count” and “ration day” on the San Carlos Apache Reservation about 1878. The people would be counted to ensure none had left the reservation and then issued European foodstuffs. In recent years there has been a drive to reintroduce traditional foods in the Native American diet in order to treat obesity and diabetes.
At the end of the Indian wars, the prevailing idea, irrespective of the facts, was that Native Americans were a dying race, with their only hope that of integration into American society. Government Indian boarding schools were instituted toward this end, and Phoenix had one of the biggest. This is a view of a Phoenix dormitory in the 1890s. Some old postcards call this the “Boys Hall,” while others label the same view “Girls Dormitory.” Phoenix Indian School was established in 1891 and closed in 1990. Historical archaeologist Owen Lindauer has commented, “Many students were forcibly separated from their parents, and the rapid personal transformation demanded of pupils was facilitated through a draconian and abrupt detachment from tribal cultural patterns.” (“Archaeology of the Phoenix Indian School,” March 27, 1998, Archaeological Institute of America)
Phoenix Indian School was big on band, sports and military classes. This is a group of officers with their Anglo commander about 1912. Many Native Americans seemed to welcome military life and became heroic soldiers and marines. A group of Geronimo’s warriors were recruited for Company I, 12th Infantry, US Army. Previously, Apache and Yavapai army scouts were General Crook’s secret weapon against Geronimo. Actually, rather than secret they became storied, remaining a part of the Army Signal Corps into the 1920s. And then the use of code talkers during World War II has recently become equally legendary.
This is a classroom in a Hualapai school near Kingman around 1900. They look glum, but even Anglo children knew it would be rude to smile for the camera in those days. And their rather rude classroom was probably no worse than many Arizona rural school buildings at the time. Those are maps of the continents decorating the walls above the hat pegs. In 1900, many Anglo kids couldn’t afford shoes either. This is likely a day school instead of a boarding school. Still, most Indian schools failed to produce fully integrated citizens. In the words of former Commissioner of Indian Affairs Francis Leupp, writing in 1910, the Indians did not fail in their quest for an education but the schools failed the Indians. (p. 129, Dejong, Promises of the Past, 1993) Well-educated Native Americans would continue to have difficulty competing in the job market, excelling at liberal politics or finding happiness at home in a country where half of all marriages fail.
In the end, European civilization came to dominate the western hemisphere. Jared Diamond (Guns, Germs & Steel, 1997) has shown how chance circumstances of geography denied Native Americans the usefulness of steel tools, animals capable of domestication and warriors with the force of gunpowder. Indigenous Americans were certainly intelligent and clever enough and their ideals were no more debilitating than the Machiavellianism and chivalry of the old world. As soon as they got their hands on horses, sheep and cattle, knives, guns and badges, Native Americans quickly demonstrated their skill with these implements.
The Indian Citizenship Act of 1924 (Snyder Act) granted full US citizenship to Indians. That didn’t mean they could vote. Voting rights have never been accorded all citizens in the US. The Indian Reorganization Act of 1934 established a framework for tribal government on reservations. Native Americans would now be able to elect tribal leaders. The service of ethnic minorities in World War II opened the door to participation in civil society following the war. When Native Americans were refused voter registration in Arizona because they were “wards of the government” they went to court and won a judgment in their favor July 15, 1948 in the Arizona Supreme Court. County election officials then turned to literacy tests to disqualify Indian voters. Local governments found ways around the Voting Rights Act of 1965, which outlawed discriminatory practices. When the first Navajo was elected to the Apache County Board of Supervisors in 1972, the Anglo supervisors refused to certify his election. Again, the Arizona Supreme Court ruled in favor of Indian electoral rights. (109 Ariz. 510, 1973) But disqualification of ballots, voters and candidates on technicalities continues.
The Quechan or Yuma people operated a ferry service across the Colorado River at Fort Yuma for forty-niners going to California. Anglo entrepreneurs then started a competing ferry. When the Quechan tried to block access the Anglos got the military to intervene. Belligerent Indians ran the troops out of the fort for a year, but soldiers returned and put the Quechan out of business for good. As Yuma’s commercial district grew, its indigenous residents were criticized for indolence and alcoholism. In this view from about 1910 with the Southern Pacific railroad depot and hotel behind the trees, a few industrious women offer train passengers their crafts. The train is just out of view at left. The little house with lots of windows is a produce exhibit. This was when the SP main line crossed the river near the old quartermaster depot and went down the middle of Madison Avenue. The bridge, depot and tracks were removed from 1927-1966. Recently this whole area of Yuma was bulldozed for development, leaving it completely unrecognizable as a historic district.
Many of Arizona’s natives produced works of art with great value. Navajo blankets, Pima baskets and Hopi pottery were admired around the world after display at a number of international expositions. This unnamed Hopi woman appears to be making everyday utilitarian vessels rather than the highly polished and finely painted pueblo ware produced during the same time period. Her clay has been ground on a stone and mixed in a bowl and the pot is fashioned from coils. The picture is a colorized black & white photo copyright 1899 by Detroit Photographic Company, which issued it as a postcard.
After a couple bloody rebellions, most of the Tohono O’odam people eventually embraced the Catholic church and labored from 1783 until 1797 building this still astonishingly beautiful work of colonial architecture, Mission San Xavier del Bac, south of Tucson. The architecture has been attributed to Ignacio Gaona but there is no documentation. Padre Kino, who first visited the area in 1692, established the mission in 1700 at a site two miles from the present church. Following the creation of the Mexican republic, the friars left in 1828, not to return until 1911. During that time, faithful Tohono O’odam caretakers preserved the building. This postcard view shows the church after repairs were completed in 1906 following an 1887 earthquake. (Celestine Chinn, Mission San Xavier del Bac, 1951)
Pleonasm (from Greek pleon, "more, too much") is the use of more words or word-parts than is necessary for clear expression: examples are black darkness or burning fire. Such redundancy is, by traditional rhetorical criteria, a manifestation of tautology.
Often, pleonasm is understood to mean a word or phrase which is useless, clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use of idiom. It can even aid in achieving a specific linguistic effect, be it social, poetic, or literary. In particular, pleonasm sometimes serves the same function as rhetorical repetition—it can be used to reinforce an idea, contention or question, rendering writing clearer and easier to understand. Further, pleonasm can serve as a redundancy check: If a word is unknown, misunderstood, or misheard, or the medium of communication is poor—a wireless telephone connection or sloppy handwriting—pleonastic phrases can help ensure that the entire meaning gets across even if some of the words get lost.
Some pleonastic phrases are part of a language's idiom, like "tuna fish" and "safe haven" in English. They are so common that their use is unremarkable, although in many cases the redundancy can be dropped with no loss of meaning.
When expressing possibility, English speakers often use potentially pleonastic expressions such as It may be possible or Maybe it's possible, where both terms (verb may / adverb maybe and adjective possible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. Others, however, use this expression only to indicate a distinction between ontological possibility and epistemic possibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (in logical terms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibility per se is far less widespread among speakers of most other languages (except in Spanish; see the examples below); rather, almost all speakers of those languages use one term in a single expression:
- French: Il est possible or il peut arriver.
- Romanian: Este posibil or se poate întâmpla.
- Spanish (typical pleonasms): Voy a subir arriba ("I am going to go up upstairs"; arriba is unnecessary) and Entra para dentro ("Go in inside"; para dentro is unnecessary).
In a satellite-framed language like English, verb phrases containing particles that denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into").
Professional and scholarly use
Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void", "terms and conditions", "each and all" are legal doublets that are part of legally operative language that is often drafted into legal documents. A classic example of such usage was that by the Lord Chancellor at the time (1864), Lord Westbury, in the English case of ex parte Gorely, when he described a phrase in an Act as "redundant and pleonastic". Although this type of usage may be favoured in certain contexts, it may also be disfavoured when used gratuitously to portray false erudition, obfuscate or otherwise introduce verbiage. This is especially so in disciplines where imprecision may introduce ambiguities (such as the natural sciences).
In addition, pleonasms can serve purposes external to meaning. For example, a speaker who is too terse is often interpreted as lacking ease or grace, because, in oral and sign language, sentences are spontaneously created without the benefit of editing. The restriction on the ability to plan often creates much redundancy. In written language, removing words not strictly necessary sometimes makes writing seem stilted or awkward, especially if the words are cut from an idiomatic expression.
On the other hand, as is the case with any literary or rhetorical effect, excessive use of pleonasm weakens writing and speech; superfluous words distract from the content. Writers wanting to conceal a thought or a purpose obscure their meaning with verbiage. William Strunk Jr. advocated concision in The Elements of Style (1918):
Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.
- "This was the most unkindest cut of all." —William Shakespeare, Julius Caesar.
- "Beyond the garage were some decorative trees trimmed as carefully as poodle dogs." —Raymond Chandler, The Big Sleep.
- "Let me tell you this, when social workers offer you, free, gratis and for nothing, something to hinder you from swooning, which with them is an obsession, it is useless to recoil ..." —Samuel Beckett, Molloy.
There are two kinds of pleonasm: syntactic pleonasm and semantic pleonasm.
- "I know you are coming."
- "I know that you are coming."
In this construction, the conjunction that is optional when joining a sentence to a verb phrase with know. Both sentences are grammatically correct, but the word that is pleonastic in this case. By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use of that makes clear that the present speaker is making an indirect rather than a direct quotation, such that s/he is not imputing particular words to the person s/he describes as having made an assertion; the demonstrative adjective that also does not fit such an example. Also, some writers may use "that" for technical clarity reasons. In some languages, such as French, the word is not optional and should therefore not be considered pleonastic.
The same phenomenon occurs in Spanish with subject pronouns. Since Spanish is a null-subject language, which allows subject pronouns to be deleted when understood, the following sentences mean the same:
- "Yo te amo."
- "Te amo."
In this case, the pronoun yo ("I") is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone or intention—this depends on pragmatics rather than grammar). Such differing but syntactically equivalent constructions, in many languages, may also indicate a difference in register.
The process of deleting pronouns is called pro-dropping, and it also happens in many other languages, such as Korean, Japanese, Hungarian, Latin, Portuguese, Scandinavian languages, some Slavic languages, and Lao.
In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic (or dummy) pronoun is used; only the first sentence in the following pair is acceptable English:
- "It rains."
- "Rains."
In this example the pleonastic "it" fills the subject function; however, it does not contribute any meaning to the sentence. The second sentence, which omits the pleonastic it, is marked as ungrammatical, although no meaning is lost by the omission. Elements such as "it" or "there" serving as empty subject markers are also called (syntactic) expletives, or dummy pronouns.
The pleonastic ne (ne pléonastique) expressing uncertainty in formal French works as follows:
- "Je crains qu'il ne pleuve."
("I fear it may rain.")
- "Ces idées sont plus difficiles à comprendre que je ne pensais."
("These ideas are harder to understand than I thought.")
Two more striking examples of French pleonastic construction are the word aujourd'hui, translated as "today" but originally meaning "on the day of today", and the phrase Qu'est-ce que c'est ?, meaning "What's that?" or "What is it?", while literally meaning "What is it that it is?".
There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."
When Robert South said, "It is a pleonasam [sic], a figure usual in Scripture, by a multiplicity of expressions to signify one notable thing," he was observing the Biblical Hebrew poetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and was written using oral patterning, which has many pleonasms. In particular, very many verses of the Psalms are split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language as distinct from spoken language were not as well-developed as they are today when the books making up the Old Testament were written. See also parallelism (rhetoric).
This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", from Peter Gabriel's "Mercy Street").
Types of syntactic pleonasm
- Overinflection: Many languages with inflection, as a result of convention, tend to inflect more words in a given phrase than is actually needed to express a single grammatical property. Take, for example, the German Die alten Frauen sprechen ("The old women speak"). Even though the first two words (the definite article and the attributive adjective) already tell us that the grammatical number of the noun phrase is plural, German still dictates that the noun that is our subject and the verb undertaken by that subject must also express and agree in grammatical number. Not all languages are quite as redundant, however, and some will omit inflection for number when there is an obvious numerical marker, as is the case with Hungarian, which does have a plural proper but expresses "two flowers" as two flowerØ. The main contrast between Hungarian and tongues such as German or even English (to a lesser extent) is that in the latter, expressing plurality when it is already evident is not optional but mandatory; neglecting these rules results in an ungrammatical sentence. As well as for number, the German phrase above overinflects for grammatical gender and grammatical case.
- Multiple Negation: In some languages, repeated negation may be used for emphasis, as in the English sentence, "There ain't nothing wrong with that." While a literal interpretation of this sentence would be "There is not nothing wrong with that," i.e. "There is something wrong with that," the intended meaning is in fact the opposite: "There is nothing wrong with that" or "There isn't anything wrong with that." The repeated negation is used pleonastically for emphasis. However, this is not always the case. In the sentence "I don't not like it," the repeated negative may be used to convey ambivalence ("I neither like nor dislike it") or even affirmation ("I do like it"). (Rhetorically, this becomes the device of litotes; it can be difficult to distinguish litotes from pleonastic double negation, a feature which may be used for ironic effect.) Although the use of "double negatives" for emphatic purposes is sometimes discouraged in standard English, it is mandatory in other languages like Spanish or French. For example, the Spanish phrase "No es nada" (It's nothing) contains both a negated verb ("no es") and the negative form of anything/nothing ("nada").
- Multiple Affirmation: In English, repeated affirmation can be used to add emphasis to an affirmative statement, just as repeated negation can add emphasis to a negative one. A sentence like I do love you, with a stronger intonation on the do, uses double affirmation. This is because all languages, by default, automatically express their sentences in the affirmative and must then alter the sentence in one way or another to express the opposite. Therefore, the sentence I love you is already affirmative, and adding the extra do only adds emphasis and does not change the meaning of the statement.
- Double Possession: The double genitive of English, as in a friend of mine, is today very much the norm. The redundancy lies in the use of mine in place of the usual prepositional pronoun me, despite the fact that the function word of already connects said friend to the me currently speaking. Another example of this possession phenomenon might be the sociolectal use of the possessive mine's, although this might in fact just be an overextension based on the ending pattern of the other possessive pronouns, like his, ours, yours, etc.
- Multiple Quality Gradation: In English, different degrees of comparison (comparatives and superlatives) are created through a morphological change to an adjective (e.g. "prettier", "fastest") or a syntactic construction (e.g. "more complex", "most impressive"). It is thus possible to combine both forms for additional emphasis: "more bigger" or "bestest". This may be considered ungrammatical, but is common in informal speech for some English speakers.
- Not all uses of constructions such as "more bigger" are pleonastic, however. Some speakers who use such utterances do so in an attempt, albeit a grammatically unconventional one, to create a non-pleonastic construction: A person who says "X is more bigger than Y" may, in the context of a conversation featuring a previous comparison of some object Z with Y, mean "The degree by which X exceeds Y in size is greater than the degree by which Z exceeds Y in size." This usage amounts to the treatment of "bigger than Y" as a single grammatical unit, namely an adjective itself admitting of degrees, such that "X is more bigger than Y" is equivalent to "X is more bigger-than-Y than Z is." Another common way to express this is: "X is even bigger than Z."
Semantic pleonasm is more a question of style and usage than grammar. Linguists usually call this redundancy to avoid confusion with syntactic pleonasm, a more important phenomenon for theoretical linguistics. It can take various forms, including:
- Overlap: One word's semantic component is subsumed by the other:
- "Receive a free gift with every purchase."
- "I ate a tuna fish sandwich."
- "The plumber fixed our hot water heater." (This pleonasm was famously attacked by American comedian George Carlin, but is not truly redundant; a device that increases the temperature of cold water to room temperature would also be a water heater, although it did not heat hot water.)
- Prolixity: A phrase may have words which add nothing, or nothing logical or relevant, to the meaning.
- "I'm going down south."
(South is not really "down", it is just drawn that way on maps by convention.)
- "You can't seem to face up to the facts."
- "He entered into the room."
- "What therefore God hath joined together, let no man put asunder."
- "He raised up his hands in a gesture of surrender."
- "Where are you at?"
- "located" or similar before a preposition: the preposition contains the idea of locatedness and does not need a servant.
- "the actual house itself" for "the house", and similar: unnecessary re-specifiers.
- "Actual fact": fact.
- "On a daily basis": daily.
- "On a --ly basis": --ly.
- "This particular item": this item.
- "Different" or "separate" after numbers: for example:
- "4 different species" are merely "4 species", as two non-different species are together one same species.
- "9 separate cars": cars are always separate.
An expression like "tuna fish", however, might elicit one of many possible responses, such as:
- It will simply be accepted as synonymous with "tuna".
- It will be perceived as redundant (and thus perhaps silly, illogical, ignorant, inefficient, dialectal, odd, and/or intentionally humorous).
- It will imply a distinction. A reader of "tuna fish" could properly wonder: "Is there a kind of tuna which is not a fish? There is, after all, a dolphin mammal and a dolphin fish." This assumption turns out to be correct, as a "tuna" can also mean a prickly pear. Further, "tuna fish" is sometimes used to refer to the flesh of the animal as opposed to the animal itself (similar to the distinction between beef and cattle).
- It will be perceived as a verbal clarification, since the word "tuna" is quite short, and may, for example, be misheard as "tune" followed by an aspiration, or (in dialects that drop the final -r sound) as "tuner".
This is a good reason for careful speakers and writers to be aware of pleonasms, especially with cases such as "tuna fish", which is normally used only in some dialects of American English, and would sound strange in other variants of the language, and even odder in translation into other languages.
A similar situation is "ink pen" instead of just "pen" in the southern United States, where "pen" and "pin" are pronounced similarly. Similarly, one might order "extra accessories" with a new camera, where a certain set of accessories is provided as part of the package and others must be ordered separately.
Note that not all constructions that are typically pleonasms are so in all cases, nor are all constructions derived from pleonasms themselves pleonastic:
- "Put that glass over there on the table."
(Could, depending on room layout, mean "Put that glass on the table across the room, not the table right in front of you"; if the room were laid out like that, most English speakers would intuitively understand that the distant, not immediate table was the one being referred to; however, if there were only one table in the room, the phrase would indeed be pleonastic. Also, it could mean, "Put that glass on that certain spot on the table"; thus in this case it is not pleonastic.)
- "I'm going way down South."
(May imply "I'm going much farther south than you might think if I didn't stress the southerliness of my destination"; but such phrasing is also sometimes—and sometimes jokingly—used pleonastically when simply "south" would do; it depends upon the context, the intent of the speaker/writer, and ultimately even on the expectations of the listener/reader.)
Morphemes, not just words, can enter the realm of pleonasm: Some word-parts are simply optional in various languages and dialects. A familiar example to American English speakers would be the allegedly optional "-al-", probably most commonly seen in "publically" vs. "publicly" – both spellings are considered correct/acceptable in American English, and both pronounced the same, in this dialect, rendering the "publically" spelling pleonastic in US English; in other dialects it is "required", while it is quite conceivable that in another generation or so of American English it will be "forbidden". This treatment of words ending in "-ic", "-ac", etc., is quite inconsistent in US English – compare "maniacally" or "forensically" with "stoicly" or "heroicly"; "forensicly" doesn't look "right" in any dialect, but "heroically" looks internally redundant to many Americans. (Likewise, there are thousands of mostly American Google search results for "eroticly", some in reputable publications, but it does not even appear in the 20-volume, 21,730-page, 500,000-definition Oxford English Dictionary, the largest in the world; and even American dictionaries give the correct spelling as "erotically".) In a more modern pair of words, Institute of Electrical and Electronics Engineers dictionaries say that "electric" and "electrical" mean exactly the same thing. However, the usual adverb form is "electrically". (For example, "The glass rod is electrically charged by rubbing it with silk".)
Some (mostly US-based) prescriptive grammar pundits would say that the "-ly" not "-ally" form is "correct" in any case in which there is no "-ical" variant of the basic word, and vice versa; i.e. "maniacally", not "maniacly", is correct because "maniacal" is a word, while "publicly", not "publically", must be correct because "publical" is (arguably) not a real word (it does not appear in the OED). This logic is in doubt, since most if not all "-ical" constructions arguably are "real" words and most have certainly occurred more than once in "reputable" publications, and are also immediately understood by any educated reader of English even if they "look funny" to some, or do not appear in popular dictionaries. Additionally, there are numerous examples of words that have very widely accepted extended forms that have skipped one or more intermediary forms, e.g. "disestablishmentarian" in the absence of "disestablishmentary" (which does not appear in the OED). At any rate, while some US editors might consider "-ally" vs. "-ly" to be pleonastic in some cases, the vast majority of other English speakers would not, and many "-ally" words are not pleonastic to anyone, even in American English.
The most common definitely pleonastic morphological usage in English is "irregardless", which is very widely criticised as being a non-word. The standard usage is "regardless", which is already negative; adding the negative prefix ir- is worse than redundant, becoming oxymoronic as it logically reverses the meaning to "with regard to/for", which is certainly not what the speaker intended to convey. (According to most dictionaries that include it, "irregardless" appears to derive from confusion between "regardless" and "irrespective", which have overlapping meanings.)
In some cases, the redundancy in meaning occurs at a syntactic level above the word, such as at the phrase level:
- "It's déjà vu all over again."
- "I never make predictions, especially about the future."
The redundancy of these two well-known statements is deliberate, for humorous effect. (See Yogiisms.) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago"—the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. Yogi Berra's humorous quote above about not making predictions isn't really a pleonasm, but rather an ironic play on words.
But "It's déjà vu all over again." could mean that there was earlier another déjà vu of the same event or idea, which has now arisen for a third time.
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes) can also be inherited by one language from the influence of another, and are not pleonasms in the more critical sense, but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question. Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly:
- "I'm after putting it on the table."
("I (have) put it on the table". This example further shows that the effect, whether pleonastic or only pseudo-pleonastic, can apply to words and word-parts, and multi-word phrases, given that the fullest rendition would be "I am after putting it on the table".)
- "Have a look at your man there."
("Have a look at that man there"; an example of word substitution, rather than addition, that seems illogical outside the dialect. This common possessive-seeming construction often confuses the non-Irish enough that they do not at first understand what is meant. Even "have a look at that man there" is arguably further doubly redundant, in that a shorter "look at that man" version would convey essentially the same meaning.)
- "She's my wife so she is."
("She's my wife." Duplicate subject and verb, post-complement, used to emphasize a simple factual statement or assertion.)
All of these constructions originate from the application of Irish Gaelic grammatical rules to the English dialect spoken, in varying particular forms, throughout the island.
Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humour or other intentional purposes, such as:
- "I abso-fuckin-lutely agree!"
(tmesis, for stress)
- "Topless-shmopless—nudity doesn't distract me."
(shm-reduplication, for humour)
The latter of these is a result of Yiddish influences on modern English, especially East Coast US English.
- "The sound of the loud music drowned out the sound of the burglary."
- "The loud music drowned out the sound of the burglary."
- "The music drowned out the burglary."
The reader or hearer does not have to be told that loud music has a sound, and in a newspaper headline or other abbreviated prose can even be counted upon to infer that "burglary" is a proxy for "sound of the burglary" and that the music necessarily must have been loud to drown it out, unless the burglary was relatively quiet (this is not a trivial issue, as it may affect the legal culpability of the person who played the music); the word "loud" may imply that the music should have been played quietly if at all. Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be properly regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying.
Prolixity is also used simply to obfuscate, confuse or euphemise, and is not necessarily redundant/pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shellshock) and "pre-owned vehicle" (used car) are both tumid euphemisms but are not redundant. Redundant forms, however, are especially common in business, political and even academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible."
In contrast to redundancy, an oxymoron results when two seemingly contradictory words are adjoined.
Redundancies sometimes take the form of foreign words whose meaning is repeated in the context:
- "We went to the 'El Restaurante' restaurant."
- "The La Brea tar pits are fascinating."
- "Roast beef served with au jus."
- "Please R.S.V.P."
- "The Schwarzwald Forest is deep and dark."
- "The Drakensberg Mountains are in South Africa."
- LibreOffice office suite.
The first three sentences use phrases which mean, respectively, "the the restaurant restaurant", "the the tar tar", and "with with". However, many times these redundancies are necessary, especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant (if spoken rather than written, it might be misinterpreted by listeners who understand both Italian and English as a generic reference rather than a proper noun, leading the hearer to ask "Which ristorante do you mean?" Such confusions are common in richly bilingual areas like Montreal or the American Southwest when people mix phrases from two languages at once). But avoiding the redundancy of the Spanish phrase in the second example would only leave an awkward alternative: "La Brea pits are fascinating."
Most writers find it best not to drop articles when using proper nouns derived from foreign languages:
- "The movie is playing at the 'El Capitan' theater."
This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc., where the article can — indeed "must" — be present where it would otherwise be "forbidden":
- "Stephen King's 'The Shining' is scary."
(Normally, the article would be left off following a possessive.)
- "I'm having an 'An American Werewolf in London' movie night at my place."
(Seemingly doubled article, which would be taken for a stutter or typographical error in other contexts.)
Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., the Sahara Desert—"Sahara" is an English approximation of the word for "deserts" in Arabic). An extreme example is Torpenhow Hill in Cumbria, the name of which is composed of words that essentially mean "hill" in the language of each of the cultures that have lived in the area during recorded history, such that it could be translated as "Hillhillhill Hill". See the List of tautological place names for many more examples.
Acronyms can also form the basis for redundancies; this is known humorously as RAS syndrome (for Redundant Acronym Syndrome Syndrome):
- "I forgot my PIN number for the ATM machine."
- "I upgraded the RAM memory of my computer."
- "She is infected with the HIV virus."
- "I have installed a CMS system on my server."
In all the examples listed above, the word after the acronym repeats a word represented in the acronym—respectively, "Personal Identification Number number", "Automated Teller Machine machine", "Random Access Memory memory", "Human Immunodeficiency Virus virus", "Content Management System system". (See RAS syndrome for many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number" for "Social Insurance Number number" [sic] is a similar common phrase in Canada.) But redundant acronyms are more common with technical (e.g. computer) terms where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM".
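The "acronym plus repeated expansion word" pattern that defines RAS syndrome is regular enough to check mechanically. The sketch below is purely illustrative: the expansion table and the function name are assumptions invented for this example, not part of any real library.

```python
# Hypothetical sketch: flagging RAS-syndrome phrases such as "PIN number"
# or "ATM machine". The EXPANSIONS table is an illustrative sample only.

EXPANSIONS = {
    "PIN": "personal identification number",
    "ATM": "automated teller machine",
    "RAM": "random access memory",
    "HIV": "human immunodeficiency virus",
    "CMS": "content management system",
}

def is_ras_redundant(phrase: str) -> bool:
    """Return True when a two-word phrase repeats the final word of the
    acronym's expansion, e.g. "PIN number" -> "...number number"."""
    words = phrase.split()
    if len(words) != 2:
        return False
    acronym, follower = words
    expansion = EXPANSIONS.get(acronym.upper())
    if expansion is None:
        return False
    return expansion.split()[-1] == follower.lower()

print(is_ras_redundant("PIN number"))  # True: "number" repeats the expansion
print(is_ras_redundant("PIN code"))    # False: "code" adds information
```

As the text observes, such a checker would over-flag reanalyzed acronyms like "scuba", so any flagged phrase would still need human judgment.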
Some redundancies are simply typographical. For instance, when a short function word like "the" occurs at the end of a line, it is very common to repeat it accidentally at the beginning of the next line, and a large number of readers would not even notice it.
Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb):
- "She slept a deep sleep.
Or, a classic example from Latin:
- "mutatis mutandis" = "with change made to what needs to be changed" (an ablative absolute construction)
The words need not be etymologically related, but simply conceptually, to be considered an example of cognate object:
- "We wept tears of joy."
Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form is polyptoton, the stylistic repetition of the same word or words derived from the same root:
- "...[T]he only thing we have to fear is fear itself."—Franklin D. Roosevelt, "First Inaugural Address", March 1933.
- "With eager feeding[,] food doth choke the feeder."—William Shakespeare, Richard II (play), II, i, 37.
As with cognate objects, these constructions are not redundant because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror".)
Semantic pleonasm and context
In many cases of semantic pleonasm, the status of a word as pleonastic depends on context. The relevant context can be as local as a neighbouring word, or as global as the extent of a speaker's knowledge. In fact, many examples of redundant expressions are not inherently redundant, but can be redundant if used one way, and are not redundant if used another way. The "up" in "climb up" is not always redundant, as in the example "He climbed up and then fell down the mountain." Many other examples of pleonasm are redundant only if the speaker's knowledge is taken into account. For example, most English speakers would agree that "tuna fish" is redundant because tuna is a kind of fish. However, given the knowledge that "tuna" can also refer to a kind of edible prickly pear, the "fish" in "tuna fish" can be seen as non-pleonastic, but rather as a disambiguator between the fish and the prickly pear.
Conversely, to English speakers who do not know Spanish, there is nothing redundant about "the La Brea tar pits" because the name "La Brea" is opaque: the speaker does not know that it is Spanish for "the tar". Similarly, even though scuba stands for "self-contained underwater breathing apparatus", a phrase like "the scuba gear" would probably not be considered pleonastic because "scuba" has been reanalyzed into English as a simple adjective, and is no longer used as a noun. (Most do not even know that it is an acronym, and do not spell it SCUBA or S.C.U.B.A. See radar and laser for similar examples.)
See also
- Double negative
- Dummy pronoun
- Redundancy (linguistics)
- Tautology (rhetoric)
- Fowler's Modern English Usage
- List of plain English words and phrases
- Politics and the English Language (George Orwell)
- Elegant variation
- Figure of speech
- Cognate object
- List of tautological place names
- Irish bull
Notes and references
- Ex p Gorely, (1864) 4 De G L & S 477.
- Partridge, Eric (1995). Usage and Abusage: A Guide to Good English. W. W. Norton & Company. ISBN 0-393-03761-4.
- Norman Swartz & Raymond Bradley (1979). Possible Worlds: an introduction to Logic and its Philosophy.
- Haegeman, L. (1991). Introduction to Government and Binding Theory. Blackwell Publishing. p. 62.
- Horn, Laurence R. Universals of Human Language, Volume I, edited by Joseph H. Greenberg, p. 176
- Wood, Jim P. (2008). "So-inversion as Polarity Focus," in Michael Grosvald and Dianne Soares (eds.), Proceedings of the 38th Western Conference on Linguistics. Fresno, CA: University of California. pp. 304–317.
- Ong, Walter J. Orality and Literacy (New Accents), p. 38 ISBN 0-415-28129-6
- McWhorter, John C. Doing Our Own Thing, p. 19. ISBN 1-59240-084-1
- Merriam-Webster definition for Tuna(1)
- Merriam-Webster definition for Tuna(2)
The Soviet Union has pressed ahead with the development and deployment of new generations of increasingly capable land, sea, and air forces for nuclear attack. Modernization of the fourth generation of intercontinental ballistic missiles (ICBMs) is essentially complete. In clear violation of the SALT II Treaty, deployment of a fifth-generation ICBM, the SS-25, has begun, and its deployment has been undertaken in a manner that violates SALT I. This highly survivable weapon system represents the world's first operationally deployed road-mobile ICBM. Development continues apace on the SS-X-24, which could be deployed in a rail-mobile version this year.
The Soviets' strategic nuclear-powered ballistic missile submarine (SSBN) force remains the largest in the world. Construction continues on several new TYPHOON-Class SSBNs. The SS-NX-23, the USSR's most capable long-range submarine-launched ballistic missile (SLBM), is nearing operational status. It is deployed on the DELTA IV and probably will be deployed on DELTA III SSBNs.
The USSR currently has three manned intercontinental-capable bombers in development and production - the BEAR H, the BLACKJACK, and the BACKFIRE. Newly built BEAR H bombers are the first launch platform for the long-range AS-15 air-launched cruise missile (ALCM).
Projections for the years ahead are:
- Additional TYPHOON-Class submarines, BLACKJACK and BEAR H bombers,and SS-X-24 ICBMs, all carrying many more warheads than the systems they are replacing, will be deployed.
- By 1990, if the Soviets continue to maintain over 2,500 missile launchers and heavy bombers and even if they are within the quantitative sublimits of SALT II, the number of deployed warheads will grow to over 12,000.
- Although the Soviets would not necessarily expand their intercontinental attack forces beyond some 12,000 to 13,000 warheads, they clearly have the capability to do so. Based on recent trends, even under SALT, the Soviets could deploy over 15,000 warheads, or by violating SALT, over 20,000 warheads by the mid-1990s.
The modernization and upgrading of these strategic forces have been paralleled by growth and increased capabilities of the Soviets' longer range intermediate-range nuclear force (LRINF) and short-range ballistic missile (SRBM) systems deployed with Soviet combat forces. Significant improvements in nuclear capable aircraft as well as increases in tactical missiles and nuclear artillery have also occurred.
Soviet leaders since the 1960s have followed a consistent and relentless policy for the development of forces for nuclear attack. The Soviet leadership recognizes the catastrophic consequences of a general nuclear war. However, Soviet military forces have taken actions and exhibited behavior which indicate that they believe a nuclear war could be fought and won at levels below general nuclear war. The grand strategy of the USSR is to attain its objectives, if possible, by means short of war by exploiting the coercive leverage inherent in superior forces, particularly nuclear forces, to instill fear, to erode the West's collective security arrangements, and to support subversion. Thus, the primary role of Soviet military power is to provide the essential underpinning for the step-by-step extension of Soviet influence and control.
In any nuclear war, Soviet strategy would be to destroy enemy nuclear forces before launch or in flight to their targets, to reconstitute the war base should nuclear weapons reach the Soviet homeland, and to support and sustain combined arms combat in different theaters of military operations. Several overarching strategic wartime missions are:
- to eliminate enemy nuclear-capable forces and related command, control, and communications capabilities;
- to seize and occupy vital areas on the Eurasian landmass; and
- to defend the Soviet state against attack.
These missions would involve:
- disruption and destruction of the enemy's essential command, control, and communications capabilities;
- destruction or neutralization of enemy nuclear forces on the ground or at sea before they could be launched; and
- protection of the Soviet leadership and cadres, military forces, and military and economic assets necessary to sustain the war.
Strategic and theater forces and programs in place or under active development designed to accomplish these objectives include:
- hard-target-capable ICBMs, new submarine-launched ballistic missiles, LRINF ballistic missiles, and land- and sea-based cruise missiles;
- short-range ballistic missiles (SRBMs)and free rocket over ground (FROG) systems deployed with combat troops;
- bombers and ALCMs designed to penetrate US and allied defensive systems;
- large numbers of land attack and antiship cruise missiles on various platforms; antisubmarine warfare (ASW) forces to attack Western nuclear-powered ballistic missile submarines;
- air and missile defenses, including early warning satellites and radars, interceptor aircraft, surface-to-air missiles (SAMs), antiballistic missile (ABM) radars and interceptors, and some antiaircraft artillery;
- antisatellite weapons;
- passive defense forces, including civil defense forces and countermeasures troops and equipment devoted to confusing incoming aircraft; and
- hardened facilities numbering in the thousands, command vehicles, and evacuation plans designed to protect Party, military, governmental and industrial staffs, essential workers, and to the extent possible the general population.
Supporting a land war in Eurasia and eliminating the US capacity to fight and support a conflict would require the capability to employ theater and strategic forces over a variety of ranges and the destruction of:
- military-associated command and control facilities and other assets;
- war-supporting industries, arsenals, and major military facilities;
- ports and airfields in the United States and along air and sea routes to European and Asian theaters of war; and
- satellite surveillance sensors, ground-based surveillance sensors, and related communications facilities.
Soviet nuclear forces are designed and personnel are trained to fulfill their missions under all circumstances. Soviet leaders appear to believe that nuclear war might last weeks or even months and have factored this possibility into their force planning. Despite public rhetoric proclaiming a commitment to no first use of nuclear weapons, the Soviets have developed extensive plans either to preempt a nuclear attack or to launch a massive first strike.
The key to a successful preemptive attack would be effective coordination of the strike and accurate intelligence on enemy intentions. Meeting these demands in war requires reliable command, control, and communications under all conditions.
A launch-under-attack circumstance would place great stress on attack warning systems and launch coordination. To meet the demands of a launch-under-attack contingency, the Soviets have established an elaborate warning system. Satellite, over-the-horizon radar, and early warning systems have been built to provide the Soviet Union with the capability to assess accurately and respond effectively to any nuclear attack. These warning systems could give the Soviets time to launch their nuclear forces very quickly.
Follow-on strikes would require the survival of the command, control, and communications systems as well as the weapons themselves. The Soviets have invested heavily in providing this survivability. The SS-17, SS-18, and SS-19 ICBMs are housed in the world's hardest operational silos. The Soviets are building silos for the new ABM interceptors around Moscow. To increase its survivability, the SS-20 LRINF missile is mobile. The mobile SS-25 ICBM is being deployed; the development of the mobile SS-X-24 continues; and a mobile surface-to-air missile, the SA-X-12, with some capabilities against certain types of ballistic missiles, is almost operational. The launch-control facilities for offensive missiles are housed in very hard silos or on off-road vehicles. Communications are redundant and hardened against both blast and electromagnetic pulse damage. Higher commands have multiple mobile alternate command posts available for their use, including land vehicles, trains, aircraft, and ships. Bombers are assigned dispersal airfields. Ballistic missile submarines could be hidden in caves, submerged in deep fjords just off their piers, or dispersed while being protected by Soviet surface and submarine forces.
The belief that a nuclear war might be protracted has led to the USSR's emphasis on nuclear weapon system survivability and sustainability. For their ICBM, LRINF, SRBM, SLBM, and air defense forces, the Soviets have stocked extra missiles, propellants, and warheads throughout the USSR. Some ICBM silo launchers could be reloaded, and provisions have been made for the decontamination of those launchers. Plans for the survival of necessary equipment and personnel have been developed and practiced. Resupply systems are available to reload SSBNs in protected waters.
The operational Soviet ICBM force consists of some 1,400 silo and mobile launchers, aside from those at test sites. Some 818 of the silo launchers have been rebuilt since 1972; nearly half of these silos have been refurbished since 1979. All 818 silos have been hardened against attack by currently operational US ICBMs. These silos contain the SS-17 Mod 3 (160 silos), the SS-18 Mod 4 (308), and the SS-19 Mod 3 (360), which were the world's most modern deployed ICBMs until the more modern, mobile SS-25 was deployed.
Each SS-18 and SS-19 ICBM can carry more and larger MIRVs than the Minuteman III, the most modern deployed US ICBM. The SS-18 Mod 4 carries at least ten MIRVs, and the SS-19 Mod 3 carries six, whereas the Minuteman III carries only three. The SS-18 Mod 4 was specifically designed to attack and destroy ICBMs and other hardened targets in the US. The SS-18 Mod 4 force currently deployed has the capability to destroy about 66 to 80 percent of US ICBM silos, using two nuclear warheads against each. Even after this type of attack, over 1,000 SS-18 warheads would be available for further attacks against targets in the US. The SS-19 Mod 3 ICBM, while not identical to the SS-18 in accuracy, has similar capabilities. It could be assigned similar missions and could be used against targets in Eurasia. Although the SS-17 is somewhat less capable than the SS-19, it has similar targeting flexibility.
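The arithmetic behind these claims can be checked directly. In the sketch below, the 308 launchers and ten warheads per missile come from the passage above, while the figure of roughly 1,000 US ICBM silos is an assumption supplied for illustration; the passage itself does not state the size of the US silo force.

```python
# Back-of-the-envelope check of the SS-18 Mod 4 targeting claim above.
ss18_mod4_silos = 308   # SS-18 Mod 4 launchers, as stated in the text
rvs_per_missile = 10    # "at least ten MIRVs" per SS-18 Mod 4
us_icbm_silos = 1000    # assumed size of the US ICBM silo force

total_warheads = ss18_mod4_silos * rvs_per_missile   # 3,080 warheads
expended = 2 * us_icbm_silos                         # two warheads per silo
remaining = total_warheads - expended

print(total_warheads, remaining)  # 3080 1080
```

Under these assumptions the force could commit two warheads to every US silo and still retain more than 1,000 warheads, consistent with the estimate in the text.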
The remaining Soviet ICBM silos are fitted primarily with the SS-11 Mod 2/3s and SS-13 Mod 2s. These ICBMs of older vintage are housed in less-survivable silos and are considerably less capable. Nevertheless, their destructive potential against softer area targets in the United States and Eurasia is significant in terms of many of the Soviet requirements outlined earlier.
The most recent development in the Soviets' operational ICBM force occurred with the deployment of their road-mobile SS-25 missile, in violation of SALT I and SALT II. The SS-25 is approximately the same size as the US Minuteman ICBM. It carries a single reentry vehicle and is being deployed in a road-mobile configuration similar to that of the SS-20. As such, it will be highly survivable with an inherent refire capability. Several bases for the SS-25 are operational, with a total of over 70 launchers deployed. They consist of launcher garages equipped with sliding roofs and several support buildings to house the requisite mobile support equipment.
Within the past year, the Soviets have begun dismantling SS-11 silos in compensation for SS-25 deployments. The Soviets are expected to continue to dismantle SS-11 silos. By the mid-1990s, all SS-11s will probably be deactivated.
Deployment programs for all of the currently operational silo-based Soviet ICBMs are essentially complete. The command, control, and communications system that supports the Soviet ICBM force is modern and highly survivable, and the reliability of the ICBMs themselves is regularly tested by live firings from operational complexes.
Some silo-based ICBMs in the current force that the Soviets decide not to replace with modified or new ICBMs will, in accord with past practice, be refurbished to increase their useful lifetime and reliability. During this process some system modifications also could be made.
Force Developments. Soviet research and development on ICBMs is a dynamic process involving many programs. A modernized version or a new replacement for the liquid-propelled SS-18 is likely to be produced and deployed in existing silos through the end of the century.
The Soviets appear to be planning on new solid-propellant ICBMs to meet many future mission requirements, including a counterforce capability. The Soviets already have two new solid-propellant ICBMs - the small, mobile SS-25 described above, now being deployed, and the SS-X-24. The medium-size SS-X-24 is well along in its flight test program. The SS-X-24 deployment in a rail-mobile mode could begin as early as late 1986. Silo-based deployment could occur later. Early preparations for the deployment of the SS-X-24 are already underway.
Activity at the Soviet ICBM test ranges indicates that two additional new ICBMs are under development. A new ICBM to replace the SS-18 is nearing the flight test stage of development. Additionally, a solid-propellant missile that may be larger than the SS-X-24 will begin flight-testing in the next few years. Both of these missiles are likely to have better accuracy and greater throwweight potential than their predecessors. A third possible development is that a MIRVed version of the SS-25 will be developed later this decade. Such a development would further expand the already large warhead inventory possessed by the Soviets. By the mid-1990s, the Soviet ICBM force will have been almost entirely replaced with new systems, a number of which may violate SALT II constraints.
The Soviets maintain the world's largest ballistic missile submarine force. As of early 1986, the force numbered 62 modern SSBNs carrying 944 SALT-accountable nuclear-tipped missiles. Neither total includes the 13 older GOLF II SSBs with 39 missiles, which are currently assigned theater missions. The GOLF III SSB and HOTEL III SSBN are only SALT-accountable for their missile tubes. Twenty SSBNs are fitted with 336 MIRVed submarine-launched ballistic missiles (SLBMs). These twenty units have been built and deployed within the past nine years. Two-thirds of the ballistic missile submarines are fitted with long-range SLBMs, enabling them to patrol in waters close to the Soviet Union. This affords protection from NATO antisubmarine warfare operations. Moreover, the long-range missiles allow the Soviets to fire from home ports and still strike targets in the United States.
Four units of the modern Soviet ballistic missile submarine, the TYPHOON, have already been built. Each TYPHOON carries 20 SS-N-20 solid-propellant MIRVed SLBMs. The TYPHOON is the world's largest submarine with a displacement a third greater than that of the US Ohio-Class. It can operate under the Arctic Ocean icecap, adding further to the protection afforded by the 8,300-kilometer range of its SS-N-20 SLBMs. Three or four additional TYPHOONs are probably now under construction, and by the early 1990s the Soviets could have as many as eight of these potent weapons systems in their operational force.
In accordance with the SALT I Interim Agreement, the Soviets have, since 1978, removed 14 YANKEE I units from service as ballistic missile submarines. These units had to be removed as newer submarines were produced in order for the overall Soviet SSBN force to stay within the 62 modern SSBN/950 SLBM limits established in 1972. These YANKEEs, however, have not been scrapped. Some have been reconfigured as attack or long-range cruise missile submarines.
Force Developments. The Soviets have launched three units - two of which are currently accountable under SALT - of a new class of SSBN, the DELTA IV, which will be fitted with the SS-NX-23 SLBM, now being flight-tested. This large, liquid-propelled SLBM will have greater throwweight, carry more warheads, and be more accurate than the SS-N-18, which is currently carried on the DELTA III SSBN. The SS-NX-23 is likely to be deployed on DELTA IIIs as a replacement for the SS-N-18.
The Soviets probably will begin flight-testing a modified version of the SS-N-20. Additionally, based on past Soviet practice, they probably will develop a modified version of the SS-NX-23 before the end of the decade. Both modified versions of the SS-N-20 and SS-NX-23 are likely to be more accurate than their predecessors and eventually may provide the Soviets with a hard-target capability for SLBMs.
To ensure communications reliability, the Soviets are expected to deploy an extremely low frequency (ELF) communications system that will enable them to contact SSBNs under most operating conditions.
The five air armies subordinate to the Supreme High Command (VGK) which contain the Soviet strategic bombers and strike aircraft are:
- Smolensk Air Army;
- Legnica Air Army;
- Vinnitsa Air Army;
- Irkutsk Air Army; and
- Moscow Air Army.
The assets of the air armies include some 180 BEAR and BISON bombers, 145 BACKFIRE bombers, 397 medium-range BLINDER and BADGER bombers, and 450 shorter range FENCER strike aircraft. The Soviets have allocated these aircraft among five air armies to cover specific theaters of military operations (Europe, Asia, and the United States) and yet retain the flexibility to reallocate aircraft as necessary during wartime. This flexibility allows the Soviets to alter the use of their strategic air assets as circumstances require. Soviet Naval Aviation assets include some 125 BACKFIRE and 230 BLINDER and BADGER bombers. Air army BEAR and BISON bombers also could be made available for maritime missions. In addition, the air armies and Soviet Naval Aviation have a total of some 530 tanker, reconnaissance, and electronic warfare aircraft.
The Soviets are in the process of upgrading their long-range bomber force. The new BEAR H bomber, which carries the AS-15 long-range cruise missile, became operational in 1984. About 40 of these aircraft are now in the inventory. BEAR H bombers have been observed in training flights simulating attacks against the North American continent.
The BEAR H is the first new production of a strike version of the BEAR airframe in over 15 years. Additionally, the Soviets are reconfiguring older BEAR aircraft, which carry the subsonic AS-3 air-to-surface missile (ASM), to carry the newer supersonic AS-4. Several of these reconfigurations, known as BEAR Gs, are operational.
The Soviets have been producing the BACKFIRE, their most modern operational bomber, at a rate of about 30 per year. Several modifications have been made to the aircraft and further modifications are likely to upgrade performance. The BACKFIRE can perform a variety of missions including nuclear strike, conventional attack, antiship strikes, and reconnaissance. Its low-altitude capabilities make it a formidable platform for high speed military operations. Additionally, the BACKFIRE can be equipped with a probe to permit in-flight refueling to increase its range. This would improve its capabilities against the contiguous United States.
The Soviets have assigned some FENCER strike aircraft to the air armies. The FENCER is a supersonic, variable-geometry-wing, all-weather fighter-bomber that has been in operation since 1974. Four variants have been produced, the most recent introduced in 1983. The FENCER is still in production, and the number assigned to air armies is likely to increase over the next few years.
Force Developments. The BLACKJACK, a new long-range bomber larger than the US B-1B, is still undergoing flight-testing. The BLACKJACK will be faster than the US B-1B and may have about the same combat radius. The new bomber will be capable of carrying cruise missiles, bombs, or a combination of both and could be operational as early as 1988. It probably will be used first to replace the much less capable BEAR A bomber and then the BEAR G bomber.
For several years the Soviet Union has been developing the MIDAS, an aerial-refueling tanker version of the Il-76/CANDID aircraft. When deployed in the near future, the new tanker can be used to support tactical and strategic operations and will expand significantly the ability of the Soviets to conduct longer range missions.
The Soviets have a sea-launched version and a ground-launched version of the AS-15 under development. The sea-launched variant, the SS-NX-21, is small enough to be fired from standard Soviet torpedo tubes. Possible launch platforms for the SS-NX-21 include three VICTOR classes of nuclear-powered attack submarines (SSNs); the reconfigured YANKEE-Class SSN; and the new AKULA-, MIKE-, and SIERRA-Class SSNs. The SS-NX-21 is expected to become operational soon and could be deployed on submarines off US and allied coasts.
The ground-launched cruise missile variant, the SSC-X-4, will probably become operational this year. Its mission will be to support operations in the Eurasian theater since the Soviets are unlikely to deploy it outside the USSR and its range is too short for intercontinental strikes. The SSC-X-4 is being developed as a mobile system and probably will follow operational procedures similar to the SS-20 LRINF system.
In addition to these variants of the AS-15, a larger cruise missile is under development. This missile, designated the SS-NX-24, will be flight-tested from a specially converted YANKEE-Class nuclear-powered cruise missile attack submarine (SSGN). It could become operational by 1987. A ground-based version of this missile may be developed.
All of these cruise missiles probably will be equipped with nuclear warheads when first deployed and will be capable of attacking hardened targets. These systems could be accurate enough to permit the use of conventional warheads, depending on munitions developments and the types of guidance systems incorporated in their designs. With such warheads and guidance, cruise missiles would pose a significant non-nuclear threat to US and Eurasian airfields and nuclear weapons.
In measuring and evaluating the continuing improvements being made by the USSR's strategic forces, it is useful to bear in mind the status of US forces, the modernization of which is discussed in Chapter VIII. By mid-1986, US strategic deterrent forces will include:
- 1,000 Minuteman ICBMs;
- 17 Titan ICBMs (the Titan force will be retired by the end of 1987);
- 240 B-52G/H model bombers plus additional aircraft undergoing maintenance and modification;
- 56 FB-111 bombers plus some 5 aircraft undergoing maintenance and modification;
- 17 B-1B bombers;
- 480 Poseidon (C-3 and C-4) fleet ballistic missile launchers; and
- 168 Trident fleet ballistic missile launchers.
The historic and continuing objective of US nuclear forces is deterrence of nuclear and major conventional aggression against the United States and its allies. This policy has preserved peace for a quarter-century and, in sharp contrast to the Soviet priority accorded nuclear warfighting, is based on the conviction, widely held in the US, that there could be no winners in a nuclear conflict. The United States does not now have a first-strike policy, nor do we plan to acquire a first-strike capability in the future. Rather, US deterrence policy seeks to maintain the situation in which any potential aggressor sees little to gain and much to lose by initiating hostilities against the United States or its allies. In turn, the maintenance of peace through deterrence provides the vital opportunity to pursue the US goal of eliminating nuclear weapons from the arsenals of all states.
Realizing these deterrence objectives requires the development, deployment, and maintenance of strategic forces whose size and characteristics clearly indicate to an opponent that his politico-military objectives cannot be achieved either through the employment of nuclear weapons or through political coercion based on nuclear advantages.
The Soviets began a vigorous effort to modernize and expand their intermediate-range nuclear force in 1977 with the deployment of the first SS-20 LRINF missiles. Each SS-20 is equipped with three MIRVs, more than doubling the number of LRINF warheads that existed in 1977 when the SS-20 was first deployed. The SS-20s also have significantly greater range and accuracy and a much shorter reaction time than the missiles they are replacing.
The Soviets have deployed 441 SS-20 launchers at bases west of the Urals and in the Soviet Far East. During 1984, the Soviets began construction of more new bases for the SS-20 than in any other year. Some of this construction was to accommodate units displaced from their former bases. (These bases are being converted to accommodate the SS-25 mobile ICBM.) In spite of some conversions, real growth was observed in the SS-20 force in 1985.
The mobility of the SS-20 system, unlike the SS-4, allows it to operate under both on- and off-road conditions. Consequently, the survivability of the SS-20 is greatly enhanced because of the difficulty in detecting and targeting this system when it is field deployed. Further, the SS-20 launcher can be reloaded and refired, and the Soviets stockpile refire missiles.
In addition to the SS-20s, the Soviets still maintain approximately 112 SS-4 LRINF missiles, all of which are located in the western USSR opposite European NATO.
Future Force Development. The Soviets are flight-testing an improved version of the SS-20 which is expected to be more accurate than its predecessor.
Each front commander also may have a brigade of 12 to 18 SCALEBOARD missiles available that are more accurate than the older missiles they replaced. Over 70 SCALEBOARD launchers are opposite European NATO and 40 are opposite the Sino-Soviet border. There is a battalion opposite southwest Asia/eastern Turkey, and one brigade is maintained in strategic reserve. Because of their greatly increased accuracy, the new short-range missiles can also be employed effectively with nonnuclear warheads.
In 1984, the Soviets forward-deployed the SCALEBOARD short-range ballistic missile to Eastern Europe. These front-level weapons, which normally accompany Soviet combined arms formations, are now in position to strike deep into Western Europe.
The Soviets also maintain and operate 13 GOLF II-Class ballistic missile submarines equipped with 3 SS-N-5 SLBMs each. Six GOLF IIs are based in the Baltic, where they pose a threat to most of Europe, while the remaining seven patrol the Sea of Japan, where they can be employed against targets in the Far East.
Current Systems and Force Levels. Soviet armies and fronts have missile brigades equipped with 12 to 18 SS-1C SCUD SRBMs. Over 500 SCUD launchers are located opposite European NATO, and over 100 are opposite the Sino-Soviet border and in the Far East. Additionally, about 75 are opposite southwest Asia and eastern Turkey, with one brigade held in strategic reserve.
The Soviet division commander has a variety of nuclear assets available to him. The most predominant such system at division level is the unguided free rocket over ground (FROG), which is deployed in a battalion of four launchers. The Soviets are replacing FROGs with the more accurate, longer range SS-21s in some divisions opposite NATO. There are now 500 FROG and SS-21 launchers opposite NATO. Another 215 FROG launchers are opposite the Sino-Soviet border and in the Far East; about 100 are opposite southwest Asia and eastern Turkey; and about 75 are in strategic reserve.
Front commanders also have available nuclear-capable artillery tubes. Three new self-propelled, nuclear-capable artillery pieces are being added to the inventory: a 152-mm gun, a 203-mm self-propelled gun, and a 240-mm self-propelled mortar. When fully deployed, the total number of these new nuclear-capable artillery tubes plus older 152-mm howitzers that are also capable of firing nuclear rounds will exceed 10,000.
Force Developments. As in all other nuclear attack forces, the Soviets probably will continue to seek ways to improve the capabilities of their tactical missiles and nuclear artillery. These improvements will be accomplished through incremental modernization of existing systems as well as through the introduction of entirely new systems.
The Soviets probably will continue to seek improvements for their short-range ballistic missile force. Advancements in warhead capabilities, accuracy, and reliability are expected. Combined arms commanders would then have enhanced non-nuclear targeting options and more flexible and survivable SRBMs. These systems will be capable of delivering nuclear, chemical, or conventional warheads closer to the forward edge of the battle area and at greater depths within the military theater of operations.
The initial deployment of Pershing IIs and ground-launched cruise missiles (GLCMs) began in Europe in late 1983. According to the agreed schedule, the number of US LRINF missiles deployed in Europe on 31 December 1985 totaled 236 missiles on 140 launchers. These consist of 108 Pershing II missiles on 108 launchers and 128 GLCMs on 32 launchers. The deployment of US Pershing II and ground-launched cruise missiles responds to the Soviet LRINF missile threat to NATO.
With the removal of US Pershing Is and the Soviet SS-23s replacing SCUDs in Europe, the Soviet Union will maintain its substantial numerical superiority in shorter range nonstrategic nuclear missiles while improving the qualitative characteristics of its forces. The USSR also has a significant numerical advantage in SRINF aircraft and is reducing the qualitative advantage NATO has enjoyed. This is occurring despite NATO's SRINF aircraft modernization program, in which older aircraft are being replaced by the F-16 and Tornado.
Short-range nuclear forces (SNF) consist of tube artillery and missiles of much shorter range than INF. The United States' SNF is made up of Lance tactical missiles and nuclear artillery. Although SNF artillery traditionally has been an area of NATO advantage, the balance has shifted dramatically in favor of the Soviets in recent years. The Soviets also have achieved parity in overall numbers of SNF missiles. | 1 | 3 |
As todays networks expand, the demand for more bandwidth and greater distances increases. Gigabit Ethernet and the emerging 10 Gigabit Ethernet are becoming the applications of choice for current... more/see it nowand future networking needs. Thus, there is a renewed interest in 50-micron fiber optic cable.
First used in 1976, 50-micron cable has not experienced the widespread use in North America that 62.5-micron cable has.
To support campus backbones and horizontal runs over 10-Mbps Ethernet, 62.5-micron fiber, introduced in 1986, was and still is the predominant fiber optic cable because it offers high bandwidth and long distance.
One reason 50-micron cable did not gain widespread use was because of the light source. Both 62.5 and 50-micron fiber cable can use either LED or laser light sources. But in the 1980s and 1990s, LED light sources were common. Since 50-micron cable has a smaller aperture, the lower power of the LED light source caused a reduction in the power budget compared to 62.5-micron cable—thus, the migration to 62.5-micron cable. At that time, laser light sources were not highly developed and were rarely used with 50-micron cable—mostly in research and technological applications.
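The power budget trade-off described above can be sketched as a simple calculation: a link works as long as transmitter launch power minus all fiber and connector losses still exceeds receiver sensitivity. The specific launch powers, attenuation figure, and connector losses below are illustrative assumptions, not vendor specifications:

```python
# Simplified optical link power budget (powers in dBm, losses in dB).
# All numeric values below are illustrative assumptions only.

def link_margin(tx_power_dbm, rx_sensitivity_dbm,
                fiber_atten_db_per_km, length_km,
                connector_loss_db=0.75, num_connectors=2):
    """Return the remaining margin (dB) for an optical link."""
    budget = tx_power_dbm - rx_sensitivity_dbm
    loss = fiber_atten_db_per_km * length_km + connector_loss_db * num_connectors
    return budget - loss

# An LED couples less power into a 50-micron core than into a
# 62.5-micron core (smaller aperture), shrinking the usable budget:
margin_62_5 = link_margin(-18.0, -32.0, 3.5, 0.5)  # hypothetical LED launch power
margin_50 = link_margin(-22.0, -32.0, 3.5, 0.5)    # lower launch power into 50 um

print(f"62.5-micron margin: {margin_62_5:.2f} dB")
print(f"50-micron margin:   {margin_50:.2f} dB")
```

With these assumed numbers the 50-micron link loses several dB of margin, which mirrors why LED-based networks of the 1980s and 1990s favored 62.5-micron cable.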
The cables share many characteristics. Although 50-micron fiber cable features a smaller core, which is the light-carrying portion of the fiber, both 50- and 62.5-micron cable use the same glass cladding diameter of 125 microns. Because they have the same outer diameter, they're equally strong and are handled in the same way. In addition, both types of cable are included in the TIA/EIA 568-B.3 standards for structured cabling and connectivity.
As with 62.5-micron cable, you can use 50-micron fiber in all types of applications: Ethernet, FDDI, 155-Mbps ATM, Token Ring, Fast Ethernet, and Gigabit Ethernet. It is recommended for all premise applications: backbone, horizontal, and intrabuilding connections, and it should be considered especially for any new construction and installations. IT managers looking at the possibility of 10 Gigabit Ethernet and future scalability will get what they need with 50-micron cable.
The big difference between 50-micron and 62.5-micron cable is in bandwidth. The smaller 50-micron core provides a higher 850-nm bandwidth, making it ideal for inter/intrabuilding connections. 50-micron cable features three times the bandwidth of standard 62.5-micron cable.
At 850-nm, 50-micron cable is rated at 500 MHz/km over 500 meters versus 160 MHz/km for 62.5-micron cable over 220 meters.
Fiber Type     Minimum Bandwidth (MHz-km)     Distance at 850 nm     Distance at 1310 nm
62.5/125 µm    160/500                        220 m                  500 m
50/125 µm      500/500                        500 m                  500 m
As we move towards Gigabit Ethernet, the 850-nm wavelength is gaining importance along with the development of improved laser technology. Today, a lower-cost 850-nm laser, the Vertical-Cavity Surface-Emitting Laser (VCSEL), is becoming more available for networking. This is particularly important because Gigabit Ethernet specifies a laser light source.
Other differences between the two types of cable include distance and speed. The bandwidth an application needs depends on the data transmission rate. Usually, data rates are inversely proportional to distance. As the data rate (MHz) goes up, the distance that rate can be sustained goes down. So a higher fiber bandwidth enables you to transmit at a faster rate or for longer distances. In short, 50-micron cable provides longer link lengths and/or higher speeds in the 850-nm wavelength. For example, the proposed link length for 50-micron cable is 500 meters in contrast with 220 meters for 62.5-micron cable.
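The inverse relationship between data rate and distance follows from the modal bandwidth rating quoted in MHz-km: dividing the rating by the signal bandwidth gives a first-order upper bound on link length. The sketch below ignores attenuation, encoding overhead, and dispersion penalties, so the results are rough estimates rather than standard-compliant link lengths:

```python
def max_length_km(modal_bandwidth_mhz_km, signal_bandwidth_mhz):
    """First-order modal-bandwidth limit on link length, in km."""
    return modal_bandwidth_mhz_km / signal_bandwidth_mhz

# 850-nm ratings from the table above, with a hypothetical
# 1000-MHz signal (roughly Gigabit-class):
for name, bw_mhz_km in [("62.5/125 um", 160), ("50/125 um", 500)]:
    meters = max_length_km(bw_mhz_km, 1000) * 1000
    print(f"{name}: ~{meters:.0f} m")
```

Even this crude model lands in the same order as the table's 220 m and 500 m link lengths, showing how the 50-micron fiber's threefold 850-nm bandwidth advantage translates into reach.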
Standards now exist that cover the migration of 10-Mbps to 100-Mbps or 1 Gigabit Ethernet at the 850-nm wavelength. The most logical solution for upgrades lies in the connectivity hardware. The easiest way to connect the two types of fiber in a network is through a switch or other networking box. It is not recommended to connect the two types of fiber directly.
Fiber optic cable construction and types.
Multimode vs. single-mode
Multimode cable has a large-diameter core and multiple pathways of light. It is most commonly available in two core sizes: 50-micron and 62.5-micron.
Multimode fiber optic cable can be used for most general data and voice fiber applications such as adding segments to an existing network, and in smaller applications such as alarm systems and bringing fiber to the desktop. Both multimode cable cores use either LED or laser light sources.
Multimode 50-micron cable is recommended for premise applications (backbone, horizontal, and intrabuilding connections). It should be considered for any new construction and for installations because it provides longer link lengths and/or higher speeds, particularly in the 850-nm wavelength, than 62.5-micron cable does.
Multimode cable commonly has an orange or aqua jacket; single-mode has yellow. Other colors are available for various applications and for identification purposes.
Single-mode cable has a small (8–10-micron) glass core and only one pathway of light. With only a single wavelength of light passing through its core, single-mode cable realigns the light toward the center of the core instead of simply bouncing it off the edge of the core as multimode does.
Single-mode cable provides 50 times more distance than multimode cable does. Consequently, single-mode cable is typically used in high-bandwidth applications and in long-haul network connections spread out over extended areas, including cable television and campus backbone applications. Telcos use it for connections between switching offices. Single-mode cable also provides higher bandwidth, so you can use a pair of single-mode fiber strands full-duplex at more than twice the throughput of multimode fiber.
Fiber optic cable consists of a core, cladding, coating, buffer strengthening fibers, and cable jacket.
The core is the physical medium that transports optical data signals from an attached light source to a receiving device. It is a single continuous strand of glass or plastic that’s measured (in microns) by the size of its outer diameter.
All fiber optic cable is sized according to its core’s outer diameter. The two multimode sizes most commonly available are 50 and 62.5 microns. Single-mode cores are generally less than 9 microns.
The cladding is a thin layer that surrounds the fiber core and serves as a boundary that contains the light waves and causes the refraction, enabling data to travel throughout the length of the fiber segment.
The coating is a layer of plastic that surrounds the core and cladding to reinforce the fiber core, help absorb shocks, and provide extra protection against excessive cable bends. These coatings are measured in microns (µ); the coating is 250µ and the buffer is 900µ.
Strengthening fibers help protect the core against crushing forces and excessive tension during installation. This material is generally Kevlar® yarn strands within the cable jacket.
The cable jacket is the outer layer of any cable. Most fiber optic cables have an orange jacket, although some types can have black, yellow, aqua or other color jackets. Various colors can be used to designate different applications within a network.
Simplex vs. duplex patch cables
Multimode and single-mode patch cables can be simplex or duplex.
Simplex has one fiber, while duplex zipcord has two fibers joined with a thin web. Simplex (also known as single strand) and duplex zipcord cables are tight-buffered and jacketed, with Kevlar strength members.
Because simplex fiber optic cable consists of only one fiber link, you should use it for applications that only require one-way data transfer. For instance, an interstate trucking scale that sends the weight of the truck to a monitoring station or an oil line monitor that sends data about oil flow to a central location.
Use duplex multimode or single-mode fiber optic cable for applications that require simultaneous, bidirectional data transfer. Workstations, fiber switches and servers, Ethernet switches, backbone ports, and similar hardware require duplex cable.
PVC (riser) vs. plenum-rated
PVC cable (also called riser-rated cable even though not all PVC cable is riser-rated) features an outer polyvinyl chloride jacket that gives off toxic fumes when it burns. It can be used for horizontal and vertical runs, but only if the building features a contained ventilation system. Plenum can replace PVC, but PVC cannot be used in plenum spaces.
“Riser-rated” means that the jacket is fire-resistant. However, it can still give off noxious fumes when overheated. The cable carries an OFNR rating and is not for use in plenums.
Plenum-jacketed cables have FEP, such as Teflon®, which emits less toxic fumes when it burns. A plenum is a space within the building designed for the movement of environmental air. In most office buildings, the space above the ceiling is used for the HVAC air return. If cable goes through that space, it must be “plenum-rated.”
Distribution-style vs. breakout-style
Distribution-style cables have several tight-buffered fibers bundled under the same jacket with Kevlar or fiberglass rod reinforcement. These cables are small in size and are typically used within a building for short, dry conduit runs, in either riser or plenum applications. The fibers can be directly terminated, but because the fibers are not individually reinforced, these cables need to be terminated inside a patch panel, junction box, fiber enclosure, or cabinet.
Breakout-style cables are made of several simplex cables bundled together, making a strong design that is larger than distribution cables. Breakout cables are suitable for riser and plenum applications.
Loose-tube vs. tight-buffered
Both loose-tube and tight-buffered cables contain some type of strengthening member, such as aramid yarn, stainless steel wire strands, or even gel-filled sleeves. But each is designed for very different environments.
Loose-tube cable is specifically designed for harsh outdoor environments. It protects the fiber core, cladding, and coating by enclosing everything within semi-rigid protective sleeves or tubes. Many loose-tube cables also have a water-resistant gel that surrounds the fibers. This gel helps protect them from moisture, so the cables are great for harsh, high-humidity environments where water or condensation can be a problem. The gel-filled tubes can also expand and contract with temperature changes. Gel-filled loose-tube cable is not the best choice for indoor applications.
Tight-buffered cable, in contrast, is optimized for indoor applications. Because it’s sturdier than loose-tube cable, it’s best suited for moderate-length LAN/WAN connections, or long indoor runs. It’s easier to install as well, because there’s no messy gel to clean up and it doesn’t require a fan-out kit for splicing or termination.
Indoor/outdoor cable uses dry-block technology to seal ruptures against moisture seepage and gel-filled buffer tubes to halt moisture migration. Comprised of a ripcord, core binder, a flame-retardant layer, overcoat, aramid yarn, and an outer jacket, it is designed for aerial, duct, tray, and riser applications.
Interlocking armored cable
This fiber cable is jacketed in aluminum interlocking armor so it can be run just about anywhere in a building. Ideal for harsh environments, it is rugged and rodent resistant. No conduit is needed, so it’s a labor- and money-saving alternative to using innerducts for fiber cable runs.
Outside-plant cable is used in direct burials. It delivers optimum performance in extreme conditions and is terminated within 50 feet of a building entrance. It blocks water and is rodent-resistant.
Interlocking armored cable is lightweight and flexible but also extraordinarily strong. It is ideal for out-of-the-way premise links.
Laser-optimized 10-Gigabit cable
Laser-optimized multimode fiber cable assemblies differ from standard multimode cable assemblies because they have graded refractive index profile fiber optic cable in each assembly. This means that the refractive index of the core glass decreases toward the outer cladding, so the paths of light towards the outer edge of the fiber travel quicker than the other paths. This increase in speed equalizes the travel time for both short and long light paths, ensuring accurate information transmission and receipt over much greater distances, up to 300 meters at 10 Gbps.
Laser-optimized multimode fiber cable is ideal for premise networking applications that include long distances. It is usually aqua colored.
Black Box Explains...10-Gigabit Ethernet.
10-Gigabit Ethernet, sometimes called 10-GbE or 10 GigE, is the latest improvement on the Ethernet standard, ratified in 2003 for fiber as the 802.3ae standard, in 2004 for twinax cable as the 802.3ak standard, and in 2006 for UTP as the 802.3an standard.
10-Gigabit Ethernet offers ten times the speed of Gigabit Ethernet. This extraordinary throughput plus compatibility with existing Ethernet standards has resulted in 10-Gigabit Ethernet quickly becoming the new standard for high-speed network backbones, largely supplanting older technologies such as ATM over SONET. 10-Gigabit Ethernet has even made inroads in the area of storage area networks (SAN) where Fibre Channel has long been the dominant standard. This new Ethernet standard offers a fast, simple, relatively inexpensive way to incorporate super high-speed links into your network.
Because 10-Gigabit Ethernet is simply an extension of the existing Ethernet standards family, it’s a true Ethernet standard—it’s totally backwards compatible and retains full compatibility with 10-/100-/1000-Mbps Ethernet. It has no impact on existing Ethernet nodes, enabling you to seamlessly upgrade your network with straightforward upgrade paths and scalability.
10-Gigabit Ethernet is less costly to install than older high-speed standards such as ATM.
And not only is it relatively inexpensive to install, but the cost of network maintenance and management also stays low—10-Gigabit Ethernet can easily be managed by local network administrators.
10-Gigabit Ethernet is also more efficient than other high-speed standards. Because it uses the same Ethernet frames as earlier Ethernet standards, it can be integrated into your network using switches rather than routers. Packets don’t need to be fragmented, reassembled, or translated for data to get through.
Unlike earlier Ethernet standards, which operate in half- or full-duplex, 10-Gigabit Ethernet operates in full-duplex only, eliminating collisions and abandoning the CSMA/CD protocol used to negotiate half-duplex links. It maintains MAC frame compatibility with earlier Ethernet standards with 64- to 1518-byte frame lengths. The 10-Gigabit standard does not support jumbo frames, although there are proprietary methods for accommodating them.
Fiber 10-Gigabit Ethernet standards
There are two groups of physical-layer (PHY) 10-Gigabit Ethernet standards for fiber:
LAN-PHY and WAN-PHY.
LAN-PHY is the most common group of standards. It's used for simple switch and router connections over privately owned fiber and uses a line rate of 10.3125 Gbps with 64B/66B encoding.
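The 10.3125-Gbps figure follows directly from the coding overhead: 64B/66B wraps every 64 payload bits in a 66-bit block, so the serial line rate is the 10-Gbps data rate scaled by 66/64. A one-line check:

```python
# 64B/66B line coding adds 2 sync bits to every 64-bit payload block,
# so the serial line rate is the payload rate scaled by 66/64.
payload_gbps = 10.0
line_rate_gbps = payload_gbps * 66 / 64
print(line_rate_gbps)  # 10.3125
```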
The other group of 10-Gigabit Ethernet standards, WAN-PHY, is used with SONET/SDH
interfaces for wide area networking across cities, states—even internationally.
10GBASE-SR (Short-Range) is a serial short-range fiber standard that operates over two multimode fibers. It has a range of 26 to 82 meters (85 to 269 ft.) over legacy 62.5-µm 850-nm fiber and up to 300 meters (984 ft.) over 50-µm 850-nm fiber.
10GBASE-LR (Long-Range) is a serial long-range 10-Gbps Ethernet standard that operates at ranges of up to 25 kilometers (15.5 mi.) on two 1310-nm single-mode fibers.
10GBASE-ER (Extended-Range) is similar to 10GBASE-LR but supports distances up to 40 kilometers (24.9 mi.) over two 1550-nm single-mode fibers.
10GBASE-LX4 uses Coarse-Wavelength Division Multiplexing (CWDM) to achieve ranges of 300 meters (984 ft.) over two legacy 850-nm multimode fibers or up to 10 kilometers (6.2 mi.) over two 1310-nm single-mode fibers. This standard multiplexes four data streams over four different wavelengths in the range of 1300 nm. Each wavelength carries 3.125 Gbps to achieve 10-Gigabit speed.
In fiber-based 10-Gigabit Ethernet, the 10GBASE-SR, 10GBASE-LR, and 10GBASE-ER LAN-PHY standards have WAN-PHY equivalents called 10GBASE-SW, 10GBASE-LW, and 10GBASE-EW. There is no WAN-PHY standard corresponding to 10GBASE-LX4.
WAN-PHY standards are designed to operate across high-speed systems such as SONET and SDH. These systems are often telco operated and can be used to provide high-speed data delivery worldwide. WAN-PHY 10-Gigabit Ethernet operates within SDH and SONET using an SDH/SONET frame running at 9.953 Gbps without the need to directly map Ethernet frames into SDH/SONET.
WAN-PHY is transparent to data—from the user’s perspective it looks exactly the same as LAN-PHY.
10-Gigabit Ethernet over Copper
10GBASE-CX4 is a standard that enables Ethernet to run over CX4 cable, which consists of four twinaxial copper pairs bundled into a single cable. CX4 cable is also used in high-speed InfiniBand® and Fibre Channel storage applications.
Although CX4 cable is somewhat less expensive to install than fiber optic cable, it’s limited to distances of up to 15 meters. Because this standard uses such a specialized cable at short distances, 10GBASE-CX4 is generally used only in limited data center applications such as connecting servers or switches.
10GBASE-Kx is backplane 10-Gigabit Ethernet and consists of two standards. 10GBASE-KR is a serial standard compatible with 10GBASE-SR, 10GBASE-LR, and 10GBASE-ER. 10GBASE-KX4 is compatible with 10GBASE-LX4. These standards use up to 40 inches of copper printed circuit board with two connectors in place of cable. These very specialized standards are used primarily for switches, routers, and blade servers in data center applications.
10GBASE-T is the 10-Gigabit standard that uses the familiar shielded or unshielded copper UTP cable. It operates at distances of up to 55 meters (180 ft.) over existing Category 6 cabling or up to 100 meters (328 ft.) over augmented Category 6, or “6a,” cable, which is specially designed to reduce crosstalk between UTP cables. Category 6a cable is somewhat bulkier than Category 6 cable but retains the familiar RJ-45 connectors.
To send data at these extremely high speeds across four-pair UTP cable, 10GBASE-T uses sophisticated digital signal processing to suppress crosstalk between pairs and to remove signal reflections.
10-Gigabit Ethernet Applications
> 10-Gigabit Ethernet is already being deployed in applications requiring extremely high bandwidth:
> As a lower-cost alternative to Fibre Channel in storage area networking (SAN)
> High-speed server interconnects in server clusters.
> Aggregation of Gigabit segments into 10-Gigabit Ethernet trunk lines.
> High-speed switch-to-switch links in data centers.
> Extremely long-distance Ethernet links over public SONET infrastructure.
Although 10-Gigabit Ethernet is currently being implemented only by extremely high-volume users such as enterprise networks, universities, telecommunications carriers, and Internet service providers, it’s probably only a matter of time before it’s delivering video to your desktop. Remember that only a few years ago, a mere 100-Mbps was impressive enough to be called “Fast Ethernet.”
Black Box Explains...The MPO connector.
MPO stands for multifiber push-on connector. It is a connector for multifiber ribbon cable that generally contains 6, 8, 12, or 24 fibers. It is defined by IEC-61754-7 and EIA/TIA-604-5-D, also known as FOCIS 5. The MPO connector, combined with lightweight ribbon cable, represents a huge technological advance over traditional multifiber cables. It's lighter, more compact, easier to install, and less expensive.
A single MPO connector replaces up to 24 standard connectors. This very high density means lower space requirements and reduced costs for your installation. Traditional, tight-buffered multifiber cable needs to have each fiber individually terminated by a skilled technician. But MPO fiber optic cable, which carries multiple fibers, comes preterminated.
Just plug it in and you're ready to go.
MPO connectors feature an intuitive push-pull latching sleeve mechanism with an audible click upon connection and are easy to use. The MPO connector is similar to the MT-RJ connector. The MPO’s ferrule surface of 2.45 x 6.40 mm is slightly bigger than the MT-RJ’s, and the latching mechanism works with a sliding sleeve latch rather than a push-in latch.
The MPO connector can be either male or female. You can tell the male connector by the two alignment pins protruding from the end of the ferrule. The MPO ferrule is generally flat for multimode applications and angled for single-mode applications.
MPO connectors are also commonly called MTP® connectors; MTP is a registered trademark of US Conec. The MTP connector is an MPO connector with a number of design enhancements, and it is intermateable with generic MPO connectors.
Black Box Explains...How fiber is insulated for use in harsh environments.
Fiber optic cable not only gives you immunity to interference and greater signal security, but it's also constructed to insulate the fiber's core from the stress associated with use in harsh environments.
The core is a very delicate channel that’s used to transport data signals from an optical transmitter to an optical receiver. To help reinforce the core, absorb shock, and provide extra protection against cable bends, fiber cable contains a coating of acrylate plastic.
In an environment free from the stress of external forces such as temperature, bends, and splices, fiber optic cable can transmit light pulses with minimal attenuation. And although there will always be some attenuation from external forces and other conditions, there are two methods of cable construction to help isolate the core: loose-tube and tight-buffer construction.
In a loose-tube construction, the fiber core literally floats within a plastic gel-filled sleeve. Surrounded by this protective layer, the core is insulated from temperature extremes, as well as from damaging external forces such as cutting and crushing.
In a tight-core construction, the plastic extrusion method is used to apply a protective coating directly over the fiber coating. This helps the cable withstand even greater crushing forces. But while the tight-buffer design offers greater protection from core breakage, it's more susceptible to stress from temperature variations. Conversely, while it's more flexible than loose-tube cable, the tight-buffer design offers less protection from sharp bends or twists.
<urn:uuid:c06d2fea-a84c-4bc0-9255-f21b63716828> | Bass instruments — whether acoustic, electric or electronic — are crucial to the majority of modern music. We all know we need to make the bass end work and, for most contemporary styles at least, that we need to make the bass part 'pump' and work with the drums to establish a compelling groove. To the beginner, though, getting things right on the bass end of the mix can seem a mysterious art, not to mention hugely frustrating. We can point an uncertain finger of blame: it's 'too muddy', 'too deep', 'too boxy', or it's 'ill-defined', 'too quiet' or 'doesn't punch through the mix' (and many more expletive-laden phrases besides). This article explores the theory behind some common problems, and suggests tips and techniques to overcome them.
'Bass' has several meanings — it is the name of an instrument (or, rather, several instruments), and of a drum. More generally, it refers to a portion of the frequency spectrum. So it's worth saying up front that where I'm discussing instruments, I'll name them to avoid confusion ('kick' for the drum, 'bass guitar' for, erm, the bass guitar, for example). When I'm talking about bass as a frequency range, I'll mean the range from roughly 60Hz to 250Hz. Frequencies below that I'll call 'sub-bass', and higher than that, well, I'll make sure you know what I'm talking about...
As creative types, we like to think of ourselves as artists but, as with all things audio-related, a little bit of the science can help us. Sound is a mechanical vibration that produces waves, which travel (mostly) through air molecules. We detect these vibrations through our hearing apparatus (mainly our ears and brain). So there are three areas to consider here: how we generate vibrations that make good bass sounds; what happens to low-frequency sound waves before they reach us; and how we detect and interpret the vibrations.
We create bass sounds using acoustic instruments (for example, kick drum, piano, pipe organ, or double bass), electric instruments (bass guitar, electric double bass) that are then amplified, or electronic instruments (hardware synths and virtual instruments) that are then amplified. Drums, pipe organs, tubas and esoterica aside, most acoustic and electric bass instruments use strings. Even synthesizers owe a great debt to stringed instruments, as we'll see. So it is helpful for us to understand a little bit about how strings make sound.
Imagine an acoustic bass instrument, such as a double bass. The string is fixed at both its ends to the instrument, so when the string is plucked (or picked, bowed, pinched or slapped, as you prefer) it vibrates, and the body of the instrument resonates and amplifies the sound. The rate of vibration of the string is always divisible by the string's length, so in addition to its lowest (fundamental) note, it emits higher frequencies (upper partials) that are multiples of the fundamental — in other words, simple harmonics. So if you take a low 'A' note (110Hz), the simple harmonics are multiples of 110Hz, and an EQ boost at 220Hz (second harmonic), 330Hz (third harmonic), 440Hz (fourth harmonic), and so on should pick out more of the simple harmonics, which we will perceive as reinforcing the sound of the fundamental (we'll come on to perception later). There are other factors that affect the instrument's timbre — not least the materials and construction of the body, which affects the way the instrument resonates, and the amount of sustain, for example — but the basic theory is borne out in practice, which means that you need to pay attention to the harmonic content of the bass in your mix as well as to the fundamental frequencies.
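As a quick illustration of that arithmetic, here is a tiny sketch (the function name is my own) listing the frequencies you might target with those EQ boosts:

```python
# Upper partials of a vibrating string are integer multiples of the
# fundamental, so for a low 'A' at 110 Hz the simple harmonics fall at
# 220, 330, 440 Hz and so on.
def harmonic_series(fundamental_hz, count):
    """Return the fundamental and its first (count - 1) upper partials."""
    return [fundamental_hz * n for n in range(1, count + 1)]

print(harmonic_series(110, 5))  # [110, 220, 330, 440, 550]
```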
Synthesizers draw on the same theory. The classic dance bass synth is the Roland TB303. It started life as an auto-accompaniment bass machine for guitarists, and evolved into one of the most influential bass synths in modern dance music. The synth itself was very simple — a single-oscillator affair offering only sawtooth or square waveforms. Those of you who noticed the lack of sine or triangle wave in the list can give yourselves a pat on the back: in fact, most bass synths start out with a harmonically rich source, such as the sawtooth or square wave in the TB303. The sawtooth wave contains all the integer harmonics (both odd and even), whereas the square-tooth wave contains all the odd integer harmonics. Conversely, triangle waves make a poor starting point for subtractive synthesis, as they are much less harmonically rich, and sine waves have no harmonics at all, so if you filter them, all you achieve is a reduction in level. To hear such sounds in a mix you'll need to raise the level, which means that you'll use up valuable mix headroom.
From this starting point, we can quickly develop a killer bass patch. Low-pass filters can be used to remove any unwanted higher harmonic information, while high-pass filters can be used to remove any excessive low frequencies that would otherwise eat up headroom unecessarily. The sound can be made more interesting by using an envelope to automatically bring the low-pass filter down. This can be subtle, or can be dramatic, as in the classic 'acid' sound of the TB303. You can make things still more complex and interesting by layering two sounds an octave apart. As the higher octave is a harmonic of the lower one, you'll have plenty of harmonic activity relating to the lower note. Using the envelope to bring the low-pass filter down more quickly on the higher note than the lower will produce a more convincingly 'real' sound, loosely resembling the dying away of the higher-frequency harmonics that occurs in stringed instruments.
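The difference in harmonic content between the two classic source waves is easy to verify numerically. The sketch below builds band-limited sawtooth and square waves by additive synthesis (the sample rate and harmonic count are arbitrary, illustrative choices) and confirms that only the sawtooth has energy at the even second harmonic:

```python
import numpy as np

SR = 8000          # sample rate in Hz; an illustrative value only
F0 = 110.0         # low 'A' fundamental
N_SAMPLES = SR     # one second of audio, so FFT bins land on exact 1 Hz steps

def additive(partial_numbers):
    """Sum sine partials at F0 * n with the 1/n amplitudes of an ideal wave."""
    t = np.arange(N_SAMPLES) / SR
    return sum(np.sin(2 * np.pi * F0 * n * t) / n for n in partial_numbers)

saw = additive(range(1, 20))        # sawtooth: all integer harmonics
square = additive(range(1, 20, 2))  # square: odd harmonics only

spec_saw = np.abs(np.fft.rfft(saw))
spec_square = np.abs(np.fft.rfft(square))
bin_220 = 220  # with 1 Hz bin spacing, the 2nd harmonic sits in bin 220

# Both waves carry the 110 Hz fundamental, but only the sawtooth has
# energy at the even 2nd harmonic (220 Hz).
print(spec_saw[bin_220] > 100, spec_square[bin_220] < 1e-6)  # True True
```

Low-pass filtering either wave then simply thins out the upper end of these harmonic series, which is exactly what the TB303's filter envelope sweeps through.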
If you don't have a bass-specific enhancer, you can achieve a similar effect by sending the signal to an aux channel with a more conventional enhancer inserted on it. You then place a filter over the source channel to remove the unwanted low frequencies. However, on bass guitars, simulated amp distortion produces a more natural-sounding result.
It is worth noting that this technique does not give the same result as a full-range system — we cannot create the chest-slapping feel of powerful sub-bass in this way, for example — but it is an effective and worthwhile deception nonetheless.
Science can also help us understand why monitoring problems are particularly acute with bass frequencies. Poor low-frequency acoustics in the home studio are one of the key causes of poor bass sound in mixes, because what sounds great in an ill-treated home studio isn't an accurate representation of the sound that is actually being generated. It is not uncommon in such a studio to find dips of 35dB at more than one frequency in the bass end — and no matter how good the equipment you are using, you'll not get anything to sound good if this is the case. Similar considerations apply to the recording environment: you need to be very careful about what the musician hears and about microphone placement (if the mic is slap in the middle of a dip at 80Hz then you're in trouble!).
A common misconception is that using more dense materials in walls, ceilings and floors will help. The key source of problems is standing waves, which result from uncontrolled reflections. The more dense the material, the less sound passes through and the more is reflected, and although bass waves will travel more effectively through such material than higher-frequency waves, what doesn't pass through will still be reflected.
It is also important to consider your monitoring system. A full-range 2.1 system can sound great, and generate deep bass frequencies, but you need to bear in mind the audience for your music. Much modern music is consumed via laptops, iPods, TV and radio, where you won't be able to hear all that bass, and a more restricted system might be more appropriate for testing. Conversely, if you are targeting your music at dance clubs, with powerful full-range systems, then your mixes might sound wimpy if you don't test them on a bigger system, in a bigger space. Using a club system should be ideal, but even then, you need to bear in mind that it will sound different when the club is empty.
Hugh Robjohns' article on subwoofers on page 94 of this SOS explores these issues in more detail, so I won't dwell on them here. However, I can't stress enough the importance of good acoustic treatment, and more specifically of good bass trapping in your studio. This is one of the areas that still sets 'pro' studios apart from the rest, and if you don't get this right, then your recording and your mixing will suffer.
If things weren't complicated enough, we then have to throw into the equation the fact that we hear bass differently from other frequencies: welcome to the world of psychoacoustics.
Assuming that we have a well-treated room, we can see using a spectrum analyser how much headroom the bass frequencies are taking up. While this gives an accurate picture of what is actually happening, it does not reflect what we perceive to be happening — and there can be a world of difference. The brain is a complex beast, and when it comes to hearing, it is pre-programmed to 'translate' sounds in a certain way.
First, our own 'frequency response' is not flat. We perceive low frequencies and high frequencies as being quieter than the mid-range. This already complex frequency response also varies according to loudness (the actual response to different frequencies at different SPLs is illustrated in the well-known Fletcher-Munson Curves, or Equal Loudness Contours). Because the mid-range seems louder, we perceive more detail there than we do in anything occupying the bottom end. If you're having difficulty following this concept, imagine a bass guitar part doubling a guitar part: if both are played at the same level but the bass plays an octave or two below the guitar, the guitar will seem to be louder.
There is a similar situation with our perception of pitch. As the sound pressure level increases, we perceive a slight drop in pitch that varies with frequency: the higher the level, and the lower the frequency of a sound, the greater the perceived drop in pitch. So, even setting aside health issues, monitoring too loud can distort our perception of what's in tune and what isn't. In a large room, a monitoring level of around 85dB SPL is accepted as one that will allow you to translate your mixes acceptably to a good range of systems — but this will still be too loud for a typical small home studio, where something closer to 79dB SPL is likely to be better.
There are other interesting psychoacoustic phenomena that can help us with our bass end. The perception of loudness takes into account both the amplitude and duration of a sound, which is why short percussive sounds don't sound as loud as sustained sounds of the same peak level. In other words, you may be able to make a bass sound appear louder by lengthening the notes, though there quickly comes a point where this offers no further gain.
We've all experienced optical illusions, and our hearing can be similarly tricked. Yamaha's research in the '80s revealed we get our strongest impression of a sound's timbre from its attack portion. This is important for our bass for two reasons. First, by layering more percussive sounds on top of the deeper bass, we can create the impression of a sharper attack (the classic example would be a nice, slapping kick drum, layered with a deeper, longer bass sound). Secondly, when applying compression, it is crucial that the attack portion of the bass or kick is not squashed beyond recognition.
One of the most interesting tricks we can use is to generate harmonic content but remove the fundamental note. If we can hear all the harmonics present, then we perceive the fundamental to be there — no matter whether it is actually there or not. This principle is put excellently into practice by processors such as Waves' Maxx Bass and Renaissance Bass plug-ins (see box), and is particularly effective when translating full-range mixes to speaker or headphone systems such as you might have with your iPod, television or computer's built-in speakers.
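A simple numerical sketch shows the principle these processors exploit: synthesise only the upper partials of a low note and the spectrum genuinely contains nothing at the fundamental, yet the ear will still report the low pitch. The values below are illustrative assumptions, not the Waves algorithm:

```python
import numpy as np

SR = 8000        # illustrative sample rate
F0 = 55.0        # a low 'A', below what small speakers can reproduce
t = np.arange(SR) / SR  # one second of samples, giving exact 1 Hz FFT bins

# Build a tone from harmonics 2-6 of 55 Hz only, deliberately omitting
# the fundamental itself.
tone = sum(np.sin(2 * np.pi * F0 * n * t) / n for n in range(2, 7))

spec = np.abs(np.fft.rfft(tone))
fund_bin = int(F0)        # 55 Hz sits in bin 55
second_bin = int(2 * F0)  # 110 Hz sits in bin 110

# There is (numerically) nothing at 55 Hz, yet the partials at 110, 165,
# 220 Hz... are spaced 55 Hz apart, so the ear infers a 55 Hz pitch.
print(spec[fund_bin] < 1e-6, spec[second_bin] > 100)  # True True
```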
Our locational perception is also different for bass. We can detect the location of the source of mid- and high-frequency sounds through the difference in intensity between the sound at each ear; the head casts an acoustic 'shadow' over the ear farthest from the sound. For bass frequencies, by contrast, we sense location from the slight time difference in the wave hitting each ear. Hugh's subwoofer article is worth a read if you want to know more about this.
Finally, I should mention the masking effect. Where two similar sounds occupy the same frequency range, the louder one will tend to mask the quieter one. This is one reason why things can get so 'muddy' — particularly with heavy guitar tracks, where the low end of the guitars often competes with the upper frequencies of the bass and kick. This is another reason why it is important to choose kick and bass sounds that complement, rather than compete with each other.
First, create a stereo Group channel (or, if you prefer, a stereo FX channel) and insert a noise gate: the one that comes with Cubase 4 will work just fine, or you can use the Dynamics plug-in on earlier versions. Next, route the bass and kick signals to different sides of the Group by selecting the Group as a send on the bass and kick channels. Use the routing view of the send so that you can pan the send of the kick extreme left and the send of the bass extreme right. Now either turn down the level of the bass send so it is barely audible, or mute it, as we need to focus on the kick first of all.

Set the gate's attack and release times to very low values (so it responds faster to the input signal) and lower the threshold until the kick is triggering the gate. If you turn the bass send back on, you'll notice the bass pulsing in the right speaker in time with the kick in the left. You then need to get rid of the kick and pan the bass centrally. To do this, insert an imaging plug-in; the screenshots show how to do this with MDA's freeware Image (www.mda-vst.com). The Mode, S-Pan and Output sliders are set far right, so that you're listening only to the bass, now in the centre, while the gate is still being triggered by the kick signal. All that remains is to fine-tune the release time of the gate and balance the signal with the original bass using the main faders. If your sequencer has dedicated side-chain capabilities, you can achieve all this without the need for the imaging plug-in.
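For those who prefer to see the idea in code, this routing trick is essentially a key-input ('side-chain') gate: a control signal (the kick) opens and closes a gain envelope applied to the bass. Here is a minimal, illustrative sketch; the function name, default values and one-pole smoothing approach are my own assumptions, not a description of any particular plug-in:

```python
import numpy as np

def sidechain_gate(signal, key, threshold=0.1, attack=0.001, release=0.05,
                   sr=44100):
    """Open a gain envelope on `signal` whenever `key` (e.g. the kick)
    exceeds `threshold`. Attack and release are one-pole smoothing times
    in seconds; names and defaults here are illustrative choices."""
    a = np.exp(-1.0 / (attack * sr))   # fast coefficient while opening
    r = np.exp(-1.0 / (release * sr))  # slower coefficient while closing
    env = 0.0
    out = np.empty_like(np.asarray(signal, dtype=float))
    for i, (s, k) in enumerate(zip(signal, key)):
        target = 1.0 if abs(k) > threshold else 0.0
        coeff = a if target > env else r
        env = coeff * env + (1.0 - coeff) * target
        out[i] = s * env
    return out

# A sustained bass note keyed by a short kick burst: the bass is silent
# until the kick arrives, opens quickly, then decays with the release.
bass = np.ones(200)
kick = np.zeros(200)
kick[50:100] = 0.5
gated = sidechain_gate(bass, kick, sr=1000)
print(gated[10], round(gated[99], 2), gated[199] < 0.2)  # 0.0 1.0 True
```

Shortening the release makes the bass pump harder in time with the kick; lengthening it smooths the effect out.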
OK, enough science, let's move on to matters practical. Given that the harmonic content of bass and the attack portion of the envelope are so critical to our perception of the sound, it is important to make sure we capture these along with the deep bass — we can always remove what we don't need using filtering at the mix stage. It should go without saying that you'll get the best results from a good player, playing a good instrument, in a good room! For bass guitar, it's also worth making sure that the strings are reasonably new: the older and more grease-laden they are, the duller they'll sound. Some people deliberately use old strings for this reason, but most prefer the brighter sound of new strings. And always (I mean always!) make sure the bassist has tuned the bass before you start recording.
The DI signal from a bass guitar can sound a bit brittle, and on its own can lack 'oomph', but if nothing else, it offers you an insurance policy, as you can re-amp the signal, or run it through an amp modelling processor later on if needs be. Some DI boxes, such as the Sansamp Bass Driver, include amp and/or speaker simulators, which warm the sound and roll off the top end a bit. They do make the sound nicer, but while I'd happily use these for live work, personally I'd rather keep the DI clean as it affords greater flexibility at the mix stage. The DI signal can also be particularly useful when taken alongside a miked amp signal, as you can be sure you've caught the very deep end of the bass that some amps will not give you.
Bass amp and speaker cabinet modelling has come on dramatically in recent years. The popular Line 6 Bass Pod was one of the first convincing units to enable you simply to 'dial up' a classic tone, but modelling is no longer the preserve of hardware. Native Instruments' Guitar Rig 2 is as comfortable working with bass guitars as any other sort (even if the options are a little more limited), but more recently, IK Multimedia's Ampeg SVX (reviewed in SOS November 2006) has really upped the stakes. In fact, the results for some styles are arguably as good as a top-notch recording (and much more convenient). Such modelling processors are, of course, great on bass guitar, but it is also worth thinking about experimenting with them on software synths. The classic sound of many hardware bass synths results in part from analogue distortion of a similar nature to that produced by an amp, and speaker modelling will roll off some of the higher harmonics that can clash with other sounds in the mix.
Sometimes, though, you just can't beat the sound of real moving air. If you have a good bassist who knows how to get a good sound out of their instrument and their amp, then it's worth having a go at recording it in the good old-fashioned way: with microphones.
Some mics are intended specifically for kick drums and bass instruments. These include the AKG D12 and D112, the Audix D6 and Shure's Beta 52A. As with other good studio workhorse dynamic mics such as the Shure SM57, the Electrovoice RE20 and Sennheiser MD421, these are designed to withstand high sound pressure levels, and from this point of view they are ideal for bass applications. They also have a frequency response tailored for typical bass sounds, with good low-frequency response, a slightly scooped mid and a peak somewhere around the 3-4kHz area, designed to bring out more of the attack. Such mics can make an excellent choice, but, given the far from flat frequency response, they impose quite a strong character of their own, so although they'll work well for some sounds, they're unlikely to be the best choice on every occasion.
It is worth considering a more neutral mic, with a reasonably flat frequency response, as this will better capture the sound in the recording room. Condenser mics are the best choice here. Of course, you don't want to be putting your most sensitive mic right up near the grille (or inside a kick drum for that matter). However, there are many good FET condensers that will do a good job, such as the Neumann U87 and U47, or the more modestly priced TLM103, for example. The AKG C414 is another popular choice and there are many others from different manufacturers, so it is worth trying different mics out if you get the opportunity.
Mic polar patterns are an important consideration too (see Paul White's article in SOS March 2007). With a cardioid mic, for example, you can use the proximity effect to add more warmth to the sound. If you need to achieve separation from other instruments, a figure-of-eight mic can be an excellent choice under some circumstances, while an omni will give you more natural-sounding results, which can be particularly nice on acoustic instruments. On a bass cabinet, mic positioning is also important: as with guitar amps, positioning the mic away from the centre gives a warmer tone than one pointing at the centre.
Some engineers swear that they get the best sound by combining the signals of different microphones and DIs. For example, you could try a combination of DI, a 'kick' mic close to your bass cabinet, and a good condenser a little further away. Or you could try two close mics, one pointing at the centre of your bass cab, the other towards the edge, so you capture more of the sound of the amp. Another trick is to use something like an SM58 a couple of feet from the cab in conjunction with a closer dynamic mic. Compressing the SM58 signal and balancing that sound with the signal from the kick mic can help to give things some edge. If using multiple mics, it is worth taking the time to get the phase relationships sorted while recording.
Getting a good recording is one thing, but making it work with the rest of the mix is a rather different kettle of fish. Some producers start a mix with the 'feature' instrument (such as lead vocals), but it can make sense to start by getting a good balance and groove going between the kick, bass and snare, as this provides a solid foundation for the rest of the track. Be aware, though, that your perception of what works will be very different when parts are soloed than in the full mix, and you'll almost certainly need to revisit things later.
The average listener will focus mostly on musical performance, so if the timing and tuning are all over the place, it's all for nothing. If you've programmed things in, that's fine, as you'll have had plenty of control. But if the parts were played in, then there's probably some tidying up to do. It's a little more complicated than simply quantising everything. The key is to get things to work together, and it's no good having your bass notes working to a metronome if the drummer drifted away from it. Though there are some automated ways to do this sort of thing, I don't find they save time, as you need to go through to check the results, and I still find that the best way is to go in and adjust the offending notes manually. In some cases, time-stretching notes (or replacing them with the same note from another take) so that the note length better fits the groove can work well too.
There are few 'rules' in music production, but panning bass isn't far off. It is usually a good idea to pan the bass and kick to the centre. Partly this is historical (the limitations of vinyl) but, more importantly, it shares the bass energy equally between the two stereo speakers. It is also important because the listener will not always be in the sweet spot, and given that the bass is so critical to the mix, you want them to hear it wherever they are in relation to the speakers (this applies to dance music as much as any other — you want all the clubbers to feel the same bass groove).
Bass is usually more heavily compressed or limited than other sounds. This irons out peaks, and helps the groove to feel solid, and to underpin the rest of the mix. The attack and release settings in particular are critical. Too short an attack, and you'll squash the important attack phase of the note. Too long a release time and you'll ruin the groove. If you let the attack phase of the note through, then it's also a good idea to place a limiter after the compressor, in order to catch any wild peaks, and leave you more room for make-up gain so you can increase the level without peaking.
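The interaction of attack and release times can be sketched with the level detector at the heart of a typical feed-forward compressor. This is an illustrative toy: the one-pole coefficient formula and the millisecond values are assumptions, not settings from any particular unit.

```python
import math

def envelope(signal, sample_rate, attack_ms, release_ms):
    """One-pole envelope follower with separate attack and release smoothing."""
    attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env, level = [], 0.0
    for s in signal:
        s = abs(s)
        coeff = attack if s > level else release  # rising vs falling input
        level = coeff * level + (1.0 - coeff) * s
        env.append(level)
    return env

# A note onset modelled as a step from silence to full level (1 kHz control rate):
note = [0.0] * 100 + [1.0] * 900
fast = envelope(note, 1000, 2.0, 80.0)   # short attack: detector grabs the transient
slow = envelope(note, 1000, 30.0, 80.0)  # longer attack: transient slips through
```

With the 2ms attack the detector is near full level 10ms into the note, so gain reduction clamps the transient; the 30ms attack is still well below full level at that point, letting the pick or slap through before the compressor catches up.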
A common trick to increase the impact of the bass is to send the kick and bass to the same compressor and bring the compressed signal back in quite low, just to glue things together.
Compression will tend to emphasise the predominant tone of whatever is being compressed, so it makes sense to place an EQ before the compressor, to shape the sound that you want to emphasise. You can always place another one after the compressor too, so that you can sculpt things to better fit the mix.
As most consumer systems start to roll off around 80Hz, you can tailor your sound to them by placing a sharp high-pass filter at about 50Hz and applying a gentle boost around 80Hz. This won't be good for club systems, of course!
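As a sketch of the filtering half of that trick, here is a first-order high-pass at the 50Hz corner mentioned above. This is illustrative only: a genuinely 'sharp' mix filter would be a higher-order design, and the 80Hz peaking boost is omitted.

```python
import math

def high_pass(signal, sample_rate, cutoff_hz):
    """First-order high-pass: y[n] = a * (y[n-1] + x[n] - x[n-1])."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = [0.0] * len(signal)
    for n in range(1, len(signal)):
        out[n] = a * (out[n - 1] + signal[n] - signal[n - 1])
    return out

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

sr = 8000
tone = lambda freq: [math.sin(2.0 * math.pi * freq * n / sr) for n in range(sr)]
sub = high_pass(tone(20), sr, 50)     # rumble below the corner is attenuated
body = high_pass(tone(200), sr, 50)   # bass content above the corner passes
```

Running 20Hz and 200Hz test tones through it cuts the sub-rumble to roughly a third of its level while the musical bass range passes almost untouched.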
It is worth listening to the kick and bass parts at the same time when you are EQ'ing, as you need them to work together. If you find that they are competing, you can EQ them around each other. A slight, narrow peak in one and a corresponding dip in the other can help to achieve this.
If your bass sounded fine on its own, but lost clarity and energy when you finished laying down your umpteenth vocal or guitar overdub, then it is worth looking at the other sounds. High-pass filtering your guitars, vocals and other instruments in the low-mid range can help you get back the space, and avoid a nightmare muddy quagmire. You might be surprised just how high you can set a high-pass filter on guitars (particularly acoustics) and get away with it. The results may sound horrid in isolation, but as long as they work with the rest of the mix, it's not a problem. The fewer instruments that are competing in the same frequency range with the bass, the clearer and tighter your bass will sound. The same applies to panning: given that you probably already have bass, kick, snare, hi-hat and vocal somewhere down the middle, trying to pan other things out a little can help to leave space for the upper reaches of your bass instruments.
If you still find that your bass isn't cutting through, it may be down to a lack of harmonics, or the slappier attack part of the sound, as discussed earlier. If EQ isn't bringing things out, an enhancer may be the perfect tool for the job (see box), while you can emphasise the attack using a hardware processor such as SPL's Transient Designer, or a software equivalent such as Waves' Trans X, Digital Fishphones' Dominion, or the Envelope Shaper that is bundled with Cubase 4. Alternatively, you could try adding a little distortion. A common trick is to use distortion as a send effect, mixing the distorted sound back in at a low level. Tube amps (or their software equivalents) are perfect for this sort of thing.
Effects can, of course, add interest to your bass part. But they can also be effective in making it more audible in your mix. The best sort of effects, other than the usual distortion or fuzz, are modulation effects that make the sound sweep. You don't have to go crazy, but some subtle flanging, phasing or wah can work wonders.
Probably the trickiest effects for bass are delay and reverb. You want to hear each bass note individually so, given the masking effect, it is usually not a good idea to use delays or long reverbs that merge into the main sound. Where I've used delays, it's been to create more of an arpeggiator effect, adding whole notes in between spaced ones, rather than low, continuing repeats. However, a very short slapback can help to locate things. For reverb, try to keep things short. You might also find that a little pre-delay can help to separate out the reverb from the source signal, which can improve clarity.
Synths can easily produce very low pitches, whereas the bass guitar can only go so low. You can of course choose a bass that has a good low sound (Musicman basses, for example, are noted for this), and play differently to get the most from the lower notes (for example, playing further from the bridge), but sometimes it just won't seem low enough, particularly if it is competing for your audience's attention with deep synth basses on other tracks. So how do you get the same depth and power from a bass guitar without sacrificing too much of the tone?
Well, just as you can generate higher-frequency harmonics, so you can generate lower-frequency information that's related to the source material. One tool you can use is the octave divider, which creates a signal an octave (or more) below the source, calculated, as the name implies, by dividing the frequency in two. While it can be an interesting effect, the result is a little crude and quite distinctive. A better result can be obtained using a sub-synth. These devices are, in effect, gates that trigger a low-frequency synth: you set the threshold and select the trigger frequency range, and the synth auto-accompanies the source. It won't be the same tone as the original bass part, but at this level, it really doesn't matter, and the tone of the original bass part remains pretty much intact. There are a few such plug-ins available: one comes bundled with Logic Pro, for example, and there's also a freeware one for Mac and PC from MDA. My current favourite is Lowender, by ReFuse, which runs on the Pluggo platform (unfortunately, it is Mac-only, but a PC version is in the pipeline).
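The 'dividing the frequency in two' idea can be sketched with the classic analogue octaver trick: toggle a flip-flop on the input's positive-going zero crossings, which yields a square wave one octave down. A toy illustration (the sample rate, pitch and square-wave output are all assumptions; real octave dividers and sub-synths track pitch far more robustly):

```python
import math

sr = 8000
bass = [math.sin(2.0 * math.pi * 100.0 * n / sr) for n in range(sr)]  # 100 Hz tone

# Toggle a square wave on each positive-going zero crossing of the input:
# one toggle per input cycle produces a square wave at half the frequency.
square, state, prev = [], 1.0, bass[0]
for s in bass:
    if prev <= 0.0 < s:
        state = -state
    square.append(state)
    prev = s

# The output changes sign about 100 times over this one-second buffer,
# i.e. a 50 Hz square wave: one octave below the 100 Hz source.
toggles = sum(1 for a, b in zip(square, square[1:]) if a != b)
```

The crude, buzzy character of the result is exactly why the article prefers a sub-synth for musical material.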
A router is a device that forwards data packets between computer networks, creating an overlay internetwork. A router is connected to two or more data lines from different networks. When a data packet comes in one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet. A data packet is typically forwarded from one router to another through the networks that constitute the internetwork until it reaches its destination node.
The most familiar type of routers are home and small office routers that simply pass data, such as web pages, email, IM, and videos between the home computers and the Internet. An example of a router would be the owner's cable or DSL modem, which connects to the Internet through an ISP. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone. Though routers are typically dedicated hardware devices, use of software-based routers has grown increasingly common.
When multiple routers are used in interconnected networks, the routers exchange information about destination addresses using a dynamic routing protocol. Each router builds up a table listing the preferred routes between any two systems on the interconnected networks. A router has interfaces for different physical types of network connections, (such as copper cables, fiber optic, or wireless transmission). It also contains firmware for different networking Communications protocol standards. Each network interface uses this specialized computer software to enable data packets to be forwarded from one protocol transmission system to another.
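The table-building step can be illustrated with the distance-vector approach used by RIP-like routing protocols: each router merges a neighbour's advertised costs into its own table, keeping the cheapest known route to every destination. A minimal sketch (the router names, link costs and table layout are invented for illustration):

```python
# Toy distance-vector update, in the style of RIP: merge a neighbour's
# advertised distances into the local table, keeping the cheapest route.
INF = float("inf")

def merge_advertisement(table, neighbour, link_cost, advert):
    """table maps destination -> (cost, next hop); advert maps destination -> cost."""
    changed = False
    for dest, cost in advert.items():
        new_cost = link_cost + cost
        if new_cost < table.get(dest, (INF, None))[0]:
            table[dest] = (new_cost, neighbour)   # cheaper route found
            changed = True
    return changed

table = {"A": (0, None)}  # router A's view: itself at cost 0
merge_advertisement(table, "B", 1, {"A": 1, "C": 2, "D": 5})  # B is one hop away
merge_advertisement(table, "E", 2, {"D": 1})  # E offers a cheaper path to D
# table now routes C via B at cost 3, and D via E at cost 3 (not via B at 6)
```

Repeating this exchange between all neighbours until no table changes is what lets each router converge on the preferred routes mentioned above.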
Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different sub-network address. The subnet addresses recorded in the router do not necessarily map directly to the physical interface connections. A router has two stages of operation called planes:
- Control plane: A router records a routing table listing what route should be used to forward a data packet, and through which physical interface connection. It does this using internal pre-configured directives, called static routes, or by learning routes dynamically using a routing protocol.
- Forwarding plane: The router forwards data packets between incoming and outgoing interface connections. It routes it to the correct network type using information that the packet header contains. It uses data recorded in the routing table control plane.
Routers may provide connectivity within enterprises, between enterprises and the Internet, and between internet service providers (ISPs) networks. The largest routers (such as the Cisco CRS-1 or Juniper T1600) interconnect the various ISPs, or may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks. Other networking solutions may be provided by a backbone Wireless Distribution System (WDS), which avoids the costs of introducing networking cables into buildings.
All sizes of routers may be found inside enterprises. The most powerful routers are usually found in ISPs, academic and research facilities. Large businesses may also need more powerful routers to cope with ever increasing demands of intranet data traffic. A three-layer model is in common use, not all of which need be present in smaller networks.
Access routers, including 'small office/home office' (SOHO) models, are located at customer sites such as branch offices that do not need hierarchical routing of their own. Typically, they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmwares like Tomato, OpenWrt or DD-WRT.
Distribution routers aggregate traffic from multiple access routers, either at the same site, or to collect the data streams from multiple sites to a major enterprise location. Distribution routers are often responsible for enforcing quality of service across a WAN, so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks.
External networks must be carefully considered as part of the overall security strategy. Separate from the router may be a firewall or VPN handling device, or the router may include these and other security functions. Many companies produced security-oriented routers, including Cisco Systems' PIX and ASA5500 series, Juniper's Netscreen, Watchguard's Firebox, Barracuda's variety of mail-oriented devices, and many others.
In enterprises, a core router may provide a "collapsed backbone" interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations. They tend to be optimized for high bandwidth, but lack some of the features of Edge Routers.
Internet connectivity and internal use
Routers intended for ISP and major enterprise connectivity usually exchange routing information using the Border Gateway Protocol (BGP). RFC 4098 standard defines the types of BGP-protocol routers according to the routers' functions:
- Edge router: Also called a Provider Edge router, is placed at the edge of an ISP network. The router uses External BGP to EBGP protocol routers in other ISPs, or a large enterprise Autonomous System.
- Subscriber edge router: Also called a Customer Edge router, is located at the edge of the subscriber's network, it also uses EBGP protocol to its provider's Autonomous System. It is typically used in an (enterprise) organization.
- Inter-provider border router: Interconnecting ISPs, is a BGP-protocol router that maintains BGP sessions with other BGP protocol routers in ISP Autonomous Systems.
- Core router: A core router resides within an Autonomous System as a backbone to carry traffic between edge routers.
- Within an ISP: In the ISPs Autonomous System, a router uses internal BGP protocol to communicate with other ISP edge routers, other intranet core routers, or the ISPs intranet provider border routers.
- "Internet backbone:" The Internet no longer has a clearly identifiable backbone, unlike its predecessor networks. See default-free zone (DFZ). The major ISPs system routers make up what could be considered to be the current Internet backbone core. ISPs operate all four types of the BGP-protocol routers described here. An ISP "core" router is used to interconnect its edge and border routers. Core routers may also have specialized functions in virtual private networks based on a combination of BGP and Multi-Protocol Label Switching protocols.
- Port forwarding: Routers are also used for port forwarding between private internet connected servers.
- Voice/Data/Fax/Video Processing Routers: Commonly referred to as access servers or gateways, these devices are used to route and process voice, data, video, and fax traffic on the internet. Since 2005, most long-distance phone calls have been processed as IP traffic (VoIP) through a voice gateway, carrying voice traffic that traditional circuit-switched networks once carried. Use of access server type routers expanded with the advent of the internet, first with dial-up access, and another resurgence with voice phone service.
Historical and technical information
The very first device that had fundamentally the same functionality as a router does today, was the Interface Message Processor (IMP); IMPs were the devices that made up the ARPANET, the first packet network. The idea for a router (called "gateways" at the time) initially came about through an international group of computer networking researchers called the International Network Working Group (INWG). Set up in 1972 as an informal group to consider the technical issues involved in connecting different networks, later that year it became a subcommittee of the International Federation for Information Processing.
These devices were different from most previous packet networks in two ways. First, they connected dissimilar kinds of networks, such as serial lines and local area networks. Second, they were connectionless devices, which had no role in assuring that traffic was delivered reliably, leaving that entirely to the hosts (this particular idea had been previously pioneered in the CYCLADES network).
The idea was explored in more detail, with the intention to produce a prototype system, as part of two contemporaneous programs. One was the initial DARPA-initiated program, which created the TCP/IP architecture in use today. The other was a program at Xerox PARC to explore new networking technologies, which produced the PARC Universal Packet system; owing to corporate intellectual property concerns, it received little attention outside Xerox for years.
Some time after early 1974 the first Xerox routers became operational. The first true IP router was developed by Virginia Strazisar at BBN, as part of that DARPA-initiated effort, during 1975-1976. By the end of 1976, three PDP-11-based routers were in service in the experimental prototype Internet.
The first multiprotocol routers were independently created by staff researchers at MIT and Stanford in 1981; the Stanford router was done by William Yeager, and the MIT one by Noel Chiappa; both were also based on PDP-11s.
Virtually all networking now uses TCP/IP, but multiprotocol routers are still manufactured. They were important in the early stages of the growth of computer networking, when protocols other than TCP/IP were in use. Modern Internet routers that handle both IPv4 and IPv6 are multiprotocol, but are simpler devices than routers processing AppleTalk, DECnet, IP, and Xerox protocols.
From the mid-1970s and in the 1980s, general-purpose mini-computers served as routers. Modern high-speed routers are highly specialized computers with extra hardware added to speed both common routing functions, such as packet forwarding, and specialised functions such as IPsec encryption.
There is substantial use of Linux and Unix software based machines, running open source routing code, for research and other applications. Cisco's operating system was independently designed. Major router operating systems, such as those from Juniper Networks and Extreme Networks, are extensively modified versions of Unix software.
For pure Internet Protocol (IP) forwarding function, a router is designed to minimize the state information associated with individual packets. The main purpose of a router is to connect multiple networks and forward packets destined either for its own networks or other networks. A router is considered a Layer 3 device because its primary forwarding decision is based on the information in the Layer 3 IP packet, specifically the destination IP address. This process is known as routing. When each router receives a packet, it searches its routing table to find the best match between the destination IP address of the packet and one of the network addresses in the routing table. Once a match is found, the packet is encapsulated in the Layer 2 data link frame for that outgoing interface. A router does not look into the actual data contents that the packet carries, but only at the layer 3 addresses to make a forwarding decision, plus optionally other information in the header for hints on, for example, QoS. Once a packet is forwarded, the router does not retain any historical information about the packet, but the forwarding action can be collected into the statistical data, if so configured.
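The 'best match' search described above is a longest-prefix match: of all routes whose prefix contains the destination address, the most specific one wins. A linear-scan sketch using Python's standard ipaddress module (the routing-table entries are invented; real routers use tries or TCAM hardware rather than a scan):

```python
import ipaddress

# Hypothetical routing table entries: (destination prefix, next hop, interface).
routes = [
    (ipaddress.ip_network("0.0.0.0/0"), "203.0.113.1", "eth0"),     # default route
    (ipaddress.ip_network("10.0.0.0/8"), "10.255.255.254", "eth1"),
    (ipaddress.ip_network("10.1.2.0/24"), "10.1.2.254", "eth2"),
]

def lookup(destination):
    """Return the matching route with the longest (most specific) prefix."""
    addr = ipaddress.ip_address(destination)
    candidates = [r for r in routes if addr in r[0]]
    return max(candidates, key=lambda r: r[0].prefixlen)

# 10.1.2.7 matches /0, /8 and /24; the /24 entry is the most specific,
# so the packet leaves via eth2 toward next hop 10.1.2.254.
```

Once the route is chosen, the packet is encapsulated in the layer 2 frame for that outgoing interface, as the text describes.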
Forwarding decisions can involve decisions at layers other than layer 3. A function that forwards based on layer 2 information is properly called a bridge. This function is referred to as layer 2 bridging, as the addresses it uses to forward the traffic are layer 2 addresses (e.g. MAC addresses on Ethernet).
Besides deciding to which interface a packet is forwarded, which is handled primarily via the routing table, a router also has to manage congestion, when packets arrive at a rate higher than the router can process. Three policies commonly used in the Internet are tail drop, random early detection (RED), and weighted random early detection (WRED). Tail drop is the simplest and most easily implemented; the router simply drops packets once the length of the queue exceeds the size of the buffers in the router. RED probabilistically drops datagrams early when the queue exceeds a pre-configured portion of the buffer, until a pre-determined maximum, when it becomes tail drop. WRED requires a weight on the average queue size to act upon when the traffic is about to exceed the pre-configured size, so that short bursts will not trigger random drops.
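The RED behaviour described above can be written down directly: no drops below the minimum threshold, tail-drop behaviour above the maximum, and a linearly rising drop probability in between. A sketch of just the core ramp (the threshold and probability values are arbitrary examples; full RED also smooths the queue length with a moving average and scales the probability by a packet counter):

```python
def red_drop_probability(avg_queue, min_th, max_th, max_p):
    """Random Early Detection: probability of dropping an arriving packet,
    given the (smoothed) average queue length and the two thresholds."""
    if avg_queue < min_th:
        return 0.0          # queue short: accept everything
    if avg_queue >= max_th:
        return 1.0          # queue full enough: behave like tail drop
    # linear ramp from 0 up to max_p between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

WRED simply keeps separate (min_th, max_th, max_p) triples per traffic class, so higher-priority packets ride further up the queue before being dropped.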
Another function a router performs is to decide which packet should be processed first when multiple queues exist. This is managed through quality of service (QoS), which is critical when Voice over IP is deployed, so that one-way delay does not exceed about 150 ms and the quality of voice conversations is maintained.
Yet another function a router performs is called policy-based routing where special rules are constructed to override the rules derived from the routing table when a packet forwarding decision is made.
These functions may be performed through the same internal paths that the packets travel inside the router. Some of the functions may be performed through an application-specific integrated circuit (ASIC) to avoid overhead caused by multiple CPU cycles, and others may have to be performed through the CPU as these packets need special attention that cannot be handled by an ASIC.
- "Overview Of Key Routing Protocol Concepts: Architectures, Protocol Types, Algorithms and Metrics". Tcpipguide.com. Retrieved 15 January 2011.
- Requirements for IPv4 Routers, RFC 1812, F. Baker, June 1995
- Requirements for Separation of IP Control and Forwarding, RFC 3654, H. Khosravi & T. Anderson, November 2003
- "Setting uo Netflow on Cisco Routers". MY-Technet.com date unknown. Retrieved 15 January 2011.
- "Windows Home Server: Router Setup". Microsoft Technet 14 Aug 2010. Retrieved 15 January 2011.
- Oppenheimer, Pr (2004). Top-Down Network Design. Indianapolis: Cisco Press. ISBN 1-58705-152-4.
- "Windows Small Business Server 2008: Router Setup". Microsoft Technet Nov 2010. Retrieved 15 january 2011.
- "Core Network Planning". Microsoft Technet May 28, 2009. Retrieved 15 January 2011.
- Terminology for Benchmarking BGP Device Convergence in the Control Plane, RFC 4098, H. Berkowitz et al., June 2005
- "M160 Internet Backbone Router". Juniper Networks Date unknown. Retrieved 15 January 2011.
- "Virtual Backbone Routers". IronBridge Networks, Inc. September, 2000. Retrieved 15 January 2011.
- BGP/MPLS VPNs, RFC 2547, E. Rosen and Y. Rekhter, March 1999
- Davies, Shanks, Heart, Barker, Despres, Detwiler, and Riml, "Report of Subgroup 1 on Communication System", INWG Note #1.
- Vinton Cerf, Robert Kahn, "A Protocol for Packet Network Intercommunication", IEEE Transactions on Communications, Volume 22, Issue 5, May 1974, pp. 637 - 648.
- David Boggs, John Shoch, Edward Taft, Robert Metcalfe, "Pup: An Internetwork Architecture", IEEE Transactions on Communications, Volume 28, Issue 4, April 1980, pp. 612- 624.
- Craig Partridge, S. Blumenthal, "Data networking at BBN"; IEEE Annals of the History of Computing, Volume 28, Issue 1; January–March 2006.
- Valley of the Nerds: Who Really Invented the Multiprotocol Router, and Why Should We Care?, Public Broadcasting Service, Accessed August 11, 2007.
- Router Man, NetworkWorld, Accessed June 22, 2007.
- David D. Clark, "M.I.T. Campus Network Implementation", CCNG-2, Campus Computer Network Group, M.I.T., Cambridge, 1982; pp. 26.
- Pete Carey, "A Start-Up's True Tale: Often-told story of Cisco's launch leaves out the drama, intrigue", San Jose Mercury News, December 1, 2001.
SEXTANT: The Watershed
At the Casablanca, TRIDENT, and QUADRANT Conferences a strategy whose successful execution would break the blockade of China had been roughly shaped. Pledges had been given to the Chinese, notably that of TRIDENT: "No limits, except those imposed by time and circumstance, will be placed on the above operations, which have for their object the relief of the siege of China." This statement had followed on a Chinese threat to seek a separate peace. There was another question: how long could China survive blockade? Stilwell, Chennault, the President, the Prime Minister, all agreed at TRIDENT that China must have aid soon. Another powerful influence in shaping Allied strategy had been the President's wish that China be treated as a Great Power, that it join in the councils of the Great Powers as an equal.
To complete Allied plans for the relief of China, the President arranged for the Generalissimo to meet with him and Mr. Churchill at Cairo in November 1943. Then and there the threads would be drawn together. The Generalissimo would confer with his colleagues; the final details would be added to the plan for China's relief; the dignitaries would approve it; and a CCS directive to SEAC would be issued. The Cairo Conference was the high point, the watershed, that divided Sino-American relations. After Cairo, the currents flowed in a very different direction.
Drafting SEAC's Proposals
When Admiral Mountbatten on 1 November 1943 formally opened his headquarters as Supreme Allied Commander, Southeast Asia Command, one of his first tasks was to prepare a plan to submit to his superiors, who if approving it would provide the necessary additional resources, landing craft, principally. To play his part in this planning was Stilwell's first duty on leaving Chungking for India in October 1943. Independently, Mountbatten and Stilwell had come to similar conclusions on the preliminary studies prepared by General Headquarters (India) in the last phases of that body's concern with Burma operations. When General Auchinleck in September 1943 proffered a plan calling for the now-familiar converging attacks on Burma from Yunnan, Ledo, Assam, and the Arakan, Stilwell had been critical. The scheduling of the proposed several drives upset him, for he found them so separate in time as to open the prospect of the Allies' being defeated in detail. And he added: "I understood the orders to call for 'vigorous and aggressive action' and I don't find a hell of a lot of it in the plan. However, we will proceed as indicated and perhaps our doubts will be resolved when Admiral Mountbatten arrives."1
After examining the same plans Mountbatten, too, was critical, but where Stilwell was characteristically blunt, Mountbatten was urbane: "There is also no doubt that the climate and the antiquated and close methods used in India have their effect on the keenness of officers after a year or two and so I have found that the plans made by the Indian Staff are somewhat pessimistic and unenterprising."2
When the work of preparing SEAC's proposals to the CCS, the President, the Prime Minister, and the Generalissimo began, Stilwell submitted his views as did General Headquarters (India) and the local combined planners.3
The proposals and decisions that began to form fell into two categories, those for the first phases which SEAC could execute with its own resources, and those which needed approval and support by higher authority. Almost immediately Stilwell received his orders and approval of the opening phases of ALBACORE THREE, which called for him to establish a bridgehead over the Tanai. As for the Arakan, the SEAC minutes state that General Giffard was not satisfied with the safety of Chittagong while his troops held their current positions, so he proposed that they make a twenty-mile advance to secure the Buthidaung-Maungdaw road. (See Map 1.)
Neither Admiral Mountbatten nor General Slim was content with this modest contribution. General Giffard according to the minutes "agreed that it was mainly a defensive move." So Giffard's orders were changed to call for the exploitation of any success, with Akyab the objective. It was further agreed in SEAC that whatever the ultimate objective in Burma, 4 Corps, on the central front, at some time would be obliged to advance through the noxious and malarial Kabaw Valley. The Arakan advance might begin the second week of January 1944; 4 Corps and the Chinese forces in Yunnan (Y-Force) would move out in early March; and any airborne operation would be in mid-March.4 It is notable that Stilwell was thus directed to advance into hostile territory as part of a larger operation whose objectives had not been defined and whose resources were not at hand. However, none doubted that all would be provided in due time.
Of the three courses seen as open to 4 Corps and the air-supplied light infantry of the British Long-Range Penetration Groups (LRPG's or Chindits), the SEAC planners under General Wedemeyer, SEAC's deputy chief of staff, preferred TOREADOR, an airborne landing by two divisions in central Burma. If successful in the operation's first phases, the divisions would exploit toward Mandalay. The other alternatives considered were: (1) an overland advance toward Ye-u; (2) TARZAN, an airborne landing at Indaw on the railway to Myitkyina, a drive by the Ledo force on Myitkyina, and a bridgehead over the Chindwin to be established by 4 Corps.
Observing the drift of the planning, Stilwell grew concerned and prepared a critique on 27 October 1943 which he submitted on 3 November.5 The critique stated that all SEAC plans to date had been closely based on estimates of the logistic situation, that they had been "permeated by fear of failure or reluctance to take the bold course." Singling out TARZAN, which Mountbatten appeared to prefer, Stilwell remarked (giving an incorrect figure) that the operation comprised 80,000 troops in the Arakan,6 limited to advancing on Akyab, with nothing further contemplated; an advance from Imphal to the Chindwin River; placing a division on the railway to Myitkyina and leaving it there; an amphibious operation against the Andaman Islands, which he thought had "no immediate bearing on the main problem"; Stilwell's Chinese forces, "left to their own resources to effect a junction and open the Burma Road."
Stilwell believed that if Akyab was taken, this victory should be exploited by a series of amphibious hooks down the coast aimed at the port of Bassein. Success would give bases from which to dominate Japanese aviation in south Burma. In central Burma, he suggested a two-pronged operation aimed at Mandalay. "The Indaw operations should be cancelled." His Chinese forces would do their best in conjunction with the above. Stilwell stated:
With the large air and naval units to be committed, nothing less than the above is justified. Nothing less than this is either bold or aggressive. Nothing less takes complete advantage of our position for concentric attacks. Nothing less threatens the enemy with serious loss.
I take exception to any trend in the planning which fails to use to advantage our overwhelming strength, to any tendency towards vagueness in objectives, to any move which does not absolutely require a strong enemy reaction to check it.
Under present plans, Burma could [Stilwell's italics] be ready to fall to a vigorous attack, and for lack of trying, we might not even find this out. In other words, we are not even making a reconnaissance in force, let alone a serious attack.7
TARZAN, the plan for SEAC's share in the campaign, was nevertheless adopted by SEAC on 7 November. Behind the decision lay Mountbatten's announced desire for a guaranteed victory, his admission that he would choose the less desirable course if it promised success. TARZAN was urged by General Headquarters (India) and by Mountbatten's three commanders in chief, General Giffard, Admiral Sir James Somerville, and Air Chief Marshal Sir
Richard Peirse. General Wedemeyer commented that TARZAN would not accomplish the objectives given to SEAC in its directive. Representatives of CBI Theater headquarters at SEAC were equally critical. Even if all went well, they remarked, the monsoon rains would find SEAC having only a bridgehead over the Chindwin and an airborne division mired on the railway to Myitkyina. This seemed little to show for a season's fighting. The only consolations were that the dropping of an airborne division in Burma might open opportunities and that SEAC agreed to study operations in Burma to follow TARZAN. From the discussions that accompanied the adoption of TARZAN, CBI Theater liaison personnel received the impression that Giffard, Somerville, and Peirse were not aggressively inclined, placed no value on operations in Burma, and had staffs who were too impressed by logistical difficulties and indifferent to what might be done to improve the logistical situation. But, TARZAN it was, and the SEAC secretariat began to prepare the papers on it and on the over-all plan for Burma, now called CHAMPION, for submission to the Combined Chiefs of Staff, the President, the Prime Minister, and the Generalissimo.8 (See Map 2.)
The United States Prepares for the SEXTANT Conference
The President's diplomatic preparations for a meeting with the Generalissimo, the Prime Minister, and the Combined Chiefs of Staff had been under way since the TRIDENT Conference in Washington, May 1943. In June Mr. Roosevelt told the Generalissimo of his anxiety to meet him, and a discussion of times and places followed at once. Originally, the meeting was to have been of just the two statesmen, and the Generalissimo suggested Alaska in August or September. The course of events made the President feel ever more strongly that he should meet with Marshal Joseph V. Stalin and, of course, Mr. Churchill, and so the President began to consider co-ordinating the two meetings.
The foreign ministers' conference at Moscow in October marked further progress toward the President's goal of having China accepted as a Great Power, for Great Britain and the Soviet Union agreed to China's signing a Four Power Declaration. This agreement greatly pleased Roosevelt, who told the Generalissimo that the ice had been broken, that he and the Chinese statesman had now established the principle of China's Great Power position. To arrange a meeting between the several statesmen remained, and from this innocent circumstance the Chinese insistence on making or keeping face under any and all conditions led to great consequences. In so many words, the Generalissimo insisted on seeing Mr. Roosevelt before the latter saw Marshal Stalin, or else postponing the meeting indefinitely. Roosevelt agreed, and Brig. Gen. Patrick
J. Hurley, who acted as the President's personal representative in the Middle East, was sent to Chungking to arrange the details.9 Thus, the Generalissimo sacrificed the strategic advantage of having the last word with the President.
Even as the next meeting (SEXTANT) of the Allied statesmen was being convened, significant trends in U.S. strategy were depreciating China's importance as an ally against Japan. The increasing strength of the U.S. Navy's fast carrier task forces and the realization of the B-29's potentialities were leading the lower echelons of U.S. planners to an awareness that Japan could be defeated without a major U.S. land campaign in China. In summer 1943 the Joint Chiefs of Staff decided to use the fast carrier task forces and amphibious troops against the Japanese positions in the Gilbert and Marshall Islands. The Gilberts were to be attacked in November 1943, the Marshalls, in January 1944.10 The decision to initiate action in the Central Pacific did not, of course, by itself change China's role in the evolving strategy of the United States, but the more the fast carrier task forces prospered in their advance across the Pacific, the more islands that fell into U.S. possession, the less need there would be to seek the Generalissimo's co-operation. The means for a major thrust across the Central Pacific were coming to hand and so was the realization of China's diminishing strategic importance.
The Operations Division observed:
Despite the agreements that the United Nations should direct their principal offensive efforts against Germany and contain the Japanese by a series of relatively minor thrusts, it is becoming increasingly apparent that operations against the Japanese are approaching major proportions. Plans for the defeat of Japan are not yet firm. However, the degree of success enjoyed thus far is indicative of the need of a short-term plan for operations against Japan "upon Germany's defeat" with principal emphasis on approach from the Pacific rather than from the Asiatic mainland.11
The QUADRANT Conference, Quebec, August 1943, ordered the combined staffs to prepare a "short plan for the defeat of Japan." The planners complied on 25 October 1943. They suggested four broad possible courses of action, all of them bypassing the mainland of China. For operations in China, the Combined (i.e., Anglo-American) Staff Planners suggested only an eventual limited B-29 offensive supported through north Burma by a line of communications that would also be called on to support the Fourteenth Air Force and the re-equipping of the Chinese Army.
Of the four proposed courses, the recommended one included taking
Formosa in spring 1945, while retaining the option of taking Sumatra in spring or autumn 1945 if the Formosa operation had to be postponed. The planners concluded there was no prospect of defeating Japan by October 1945. The Central Pacific course of action included capture of the Marshall, Caroline (Truk area), Palau, and Mariana Islands. If Truk was bypassed, the advance might reach the Marianas in July 1944; Truk, in November 1944; and the Palaus, by early 1945. It was recognized that good bomber fields could be built in the Marianas.
The recommendation to the Combined Chiefs noted that in response to the Air Plan for the Defeat of Japan CBI Theater had suggested basing eight B-29 groups at Calcutta and staging them through Cheng-tu. The Combined Staff Planners had not weighed this proposal in detail but thought it might well be feasible. Their own plan called for sending 2,000 B-24's to India immediately after Germany's defeat and with them flying supplies to China to begin preparations for the reception of B-29's en masse.12
With the CCS advisers thinking of a major effort through the Pacific and of bypassing China, criticism of existing strategy for the mainland of Asia developed. As defined by the Strategy Section of the Operations Division (OPD), the current plan called for keeping China in the war as an effective ally in order to use Chinese bases to bomb the Japanese islands. A great converging attack from east and west was contemplated, to open the Hong Kong-Canton area as a base from which to launch a drive that would open a line of communications to the North China Plain. This strategy seemed defective because it was not co-ordinated with the major effort being planned for the Pacific, which included bombing Japan from the Mariana Islands in January 1945 and launching the final air and amphibious assault on the Japanese homeland not later than mid-1946. The plan of securing Chinese bases seemed too costly in men and matériel for the advantages it would yield, mainly, the chance to bomb Japan. Using Chinese bases to the fullest extent would probably require the conquest of all Burma in order to reopen the line of communications from Rangoon northward. The Strategy Section, OPD, considered that the situation in Asia, despite all earlier efforts, continued to be bad. China was still an ineffective ally, and Indian forces could not mount a major offensive. The Assam line of communications was still no better. Japan was improving her defensive position, while current U.S. strategy in Asia called for no effective blow at Japan proper before 1946.
Therefore, the Strategy Section of OPD recommended that the present approved undertakings to keep China in the war as an effective ally be fulfilled; that a limited bomber offensive from China be mounted as insurance for the Pacific effort; that no further commitments be made to CBI Theater; and that no more than thirty Chinese divisions be trained and equipped, plus three more divisional sets of equipment to be used in beginning the training of the Second
Thirty Divisions in east China. The report was innocent of diplomatic considerations; its thought was that the goal of the Pacific war was the military defeat of Japan.13
The next voices raised were those of the members of the Joint Strategic Survey Committee (JSSC), placing their views before the JCS on the eve of SEXTANT. A small group of distinguished senior officers, taking the broad detached view, they spoke with the weight of long experience. Though they were in general agreement with the QUADRANT decisions, they did think these should be reappraised in the light of the recent studies of the problem of speeding Japan's defeat, which had shown the great importance of taking the Marianas as bases for the B-29's. The JSSC stated:
We feel that without depreciating the importance of the effort against Japan by way of China, the key to the early defeat of Japan lies in all-out operations through the Central Pacific with supporting operations on the Northern and Southern flanks, using all forces, naval, air, and ground, that can be maintained and employed profitably in these areas. We believe that this principle and the related principle that operations from the West (via Singapore) would be of a diversionary nature have not been sufficiently recognized and emphasized.14
Therefore, by the time the SEXTANT Conference met, important agencies among the United States' planners were counseling a reappraisal of the United States strategy. Had the Chinese been zealous and industrious in preparations for a campaign in Burma, had they accepted and carried out Stilwell's suggestions for a potent Chinese Army of sixty divisions, and had the Generalissimo in March 1943, against whatever odds, crossed the Salween River into Burma, the United States would have been morally obligated to support the Chinese in projects it had persuaded them to undertake. Nor could India Command have held back if Chinese troops tried to liberate a major portion of the Commonwealth. But the Chinese had not thought in those terms, the months had gone past, and now American planners were beginning to conclude that they could defeat Japan without Chinese bases and without a rejuvenated Chinese Army. The recommendations which the Strategy Section of OPD made to arm thirty-three Chinese divisions, in November 1943, complemented the conclusions that Stilwell had reached one month before. Stilwell's superiors were quietly discarding the mission they had given him in February 1942, "to assist in improving the combat efficiency of the Chinese Army"; by implication, other tasks would be forthcoming.15
The U.S. advance across the Central Pacific began 20 November when U.S. forces landed in the Gilberts group. After seventy-two hours of fighting, some of it of the most desperate nature, the Marines had their objective. American sea power had taken a giant stride closer toward Japan.
The Chinese Prepare for SEXTANT
Having expressed his opinions on the proposed plan for SEAC's share in Burma operations, Stilwell left his liaison personnel to participate in the final discussions and returned to Chungking to inform the Generalissimo of the trend of SEAC's thinking and to prepare with him for the forthcoming meeting between the President and the Generalissimo. The Generalissimo was markedly pleasant and co-operative. After the events of October Stilwell was extremely skeptical of the Generalissimo's sincerity, but work must be done before the forthcoming conference, and Stilwell applied himself to it.16
At the suggestion of his friends, Mesdames Chiang Kai-shek and H. H. Kung, Stilwell, as Joint Chief of Staff, China Theater, on 5 November 1943 prepared and submitted a report to the Generalissimo on SEAC planning and Y-Force's progress in its preparations to attack from Yunnan. Telling the Generalissimo that no final SEAC plans had been made, Stilwell pointed out that "it is certain" the Chinese would be expected to make a converging attack from Assam and Yunnan into north Burma. "If for any reason the Y-force does not attack, the British [military] will have an excellent argument for giving up any plans for reopening communications with China. They have contended that the Chinese army is incapable of fighting and that there is no use in trying to build it up; failure to fight now will tend to prove them right. . . ." Then Stilwell explained why the Y-Force was not ready:
The long delay in furnishing replacements has left all divisions far below strength. . . .
The training has not yet reached the bulk of the men. . . .
The equipment brought in from India has not been distributed. There has been trouble in getting the Chinese supply agencies to take this equipment, and unusual delay in getting it into the hands of the troops. Some divisions are so weak that they cannot take care of their quota.
The majority of the men are physically incapable of sustaining prolonged hardship. . . .
The high-ranking officers generally have no offensive spirit. . . .
Insufficient trucks and animals have been provided. [Stilwell asked that the Generalissimo issue the necessary corrective orders in the most forceful manner, and closed by warning that] It is too late already for half measures, or further delays; where a few months ago corrective measures could have been taken in an orderly manner, it is now too late for any but the most drastic and thorough-going action.17
The Generalissimo took this candor in good part. He promised 50,000 replacements to bring the Y-Force up to strength, plus extra rations to meet the problem of malnutrition. The Chinese leader's cordiality was marked.18 It
extended to Stilwell's suggestions for the Chinese proposals to be offered at SEXTANT. Possibly Stilwell hoped that if the Chinese leader offered such a program to the President and the Prime Minister, the Generalissimo himself would be obliged to adhere to it. And, faithful to the "bargaining" policy that he always wanted to follow, Stilwell spelled out what China should expect of her Allies if she did her part.
MEMORANDUM: His Excellency, Generalissimo Chiang Kai-shek
PROPOSALS FOR COMING CONFERENCE
The Generalissimo's program is to bring up to effective strength, equip, and train 90 combat divisions, in 3 groups of thirty each, and 1 or 2 armored divisions.
The first group consists of the divisions in India, and those assigned to the Y-force in Yunnan Province. These divisions should be at full strength by January 1, and by that date satisfactorily equipped. . . .
The second group of thirty divisions has been designated [note that these are suggested proposals to be adopted by the Generalissimo, not a recital of accomplished facts] and a school has been set up. . . . With a road to India open, [the second thirty divisions] should be re-equipped and ready for the field in August of 1944.
A similar process will be followed with the third group of 30 divisions with target date of January 1, 1945. After the reopening of communications through Burma, 1 or 2 armored divisions will be organized.
All resources available in China will be used to produce effective combat units. Trained men of existing units will be made available as fillers.
China will participate according to the agreed plan in the recapture of Burma by attacks from Ledo with the X-force [Ledo force] and from Paoshan with the Yunnan force. This operation will be supported by naval action in the Bay of Bengal. Before the operation, British naval forces should be concentrated in time and fully prepared for action.
The training program will be followed and intensified.
Necessary airfields will be built and maintained.
In the event that communications are reopened through Burma and necessary equipment is supplied, an operation will be conducted to seize the Canton-Hongkong area and open communication by sea.
The Generalissimo expects that:
Before the 1944 rainy season an all-out effort will be made by the Allies to re-open communications through Burma to China, using land, air, and naval forces.
The U.S.A. will supply the equipment for the three groups of 30 divisions, and the armored divisions.
The Fourteenth U.S. Air Force will be maintained as agreed and supplied sufficiently to allow of sustained operations.
The Chinese Air Force will be built up promptly to 2 groups of fighters, 1 group of medium bombers, 1 reconnaissance squadron, and 1 transport squadron, and maintained at that strength. By August of 1944 a third group of fighters, and a group of heavy bombardment will be added and maintained thereafter.
Following the seizure of the Canton-Hongkong area, the U.S. will put 10 infantry divisions, 3 armored divisions and appropriate auxiliary units into South China for operations against Central and North China. Contingent upon this allocation of troops, the Generalissimo will appoint American command of those units of the combined U.S. Chinese [sic] forces which are designated in the order of battle, under his general direction.
The U.S. will, at the earliest practicable time, put long-range bombing units in China to operate against the Japanese mainland.
The ferry route will be maintained at a capacity of at least 10,000 tons a month.
Training personnel will be supplied as required.
Medical personnel will be supplied for the second and third groups of divisions.
For the Generalissimo,
JOSEPH W. STILWELL,
Joint Chief of Staff for Generalissimo.19
Stilwell thus proposed that the Generalissimo ask the United States to train and equip no less than ninety Chinese divisions. So imposing a force would dominate Asia south of the Amur River. Only the Red Army in Siberia could have faced it, and even then, the issue would have been uncertain. The Generalissimo was apparently favorably impressed by Stilwell's suggestions, for many of them were offered on behalf of China at the SEXTANT Conference.20
Confirming the Generalissimo's cordiality, Madame Chiang telephoned Stilwell that night. She told the American general that the Generalissimo was "not only pleased but happy," over his conference with Stilwell.21 On 7 November Stilwell saw the Chinese Chief of Staff, Gen. Ho Ying-chin, who was not encouraging about replacements, but presumably General Ho had not yet received orders from the Generalissimo.22
Four days later, on 11 November, General Stilwell, General Hearn, and Col. John E. McCammon, G-3, Chungking, met with General Ho and two of his staff at the Chinese National Military Council to receive the Generalissimo's formal answer to Stilwell's 5 November memorandum. The National Military Council agreed to a converging attack on Burma by British and Chinese troops but desired to hold their own advance until the British were actually attacking Kalewa in Burma. On replacements, the Chinese said that 35,000 were en route to Yunnan. In addition, 54,000 more men would be sent. To move them, the Chinese would need motor fuel, which Stilwell promptly undertook to furnish. The Chinese agreed to provide more food for the Y-Force. Their medical needs were presented. The questions of interpreters, spare parts, artillery horses, and 7.92-mm. ammunition were all presented affirmatively and solutions speedily agreed on by both sides.23 Simultaneously with these conferences on military matters Stilwell found time to talk with General Hurley, now in Chungking on behalf of the President to arrange for the Generalissimo's visit to Cairo. General Hurley made an excellent impression on Stilwell, who enjoyed Hurley's anecdotes and his comments on Allied powers and personages. For his part,
Hurley liked the outspoken, acidly witty Stilwell, and the two men got on very well.24 In speaking to the Generalissimo, Hurley gave a brief review of U.S. policy, which included "belief" in a "free, strong, democratic China, predominant in Asia."25
Thus, on the eve of SEXTANT the opportunity of creating an effective Sino-American effort in Asia seemed to exist. In October Stilwell's diaries showed the utmost skepticism about the Generalissimo's desire to reform his Army and use it aggressively against the Japanese. But now the Generalissimo was again receiving Stilwell's views, he was considering them favorably, and he was overruling his subordinates and ordering them to take action, a changed attitude which can be seen in the great difference of General Ho's expressions of 7 November from those of 11 November. For his part, as the marginal notes on the 11 November minutes show, Stilwell was meeting every Chinese proposal and promise with appropriate orders to his own people. If this atmosphere persisted, Stilwell and the War Department might be moved to re-examine their conclusions of October and November 1943.
The issue of Sino-American relations was about to move out of Stilwell's hands into those of his superiors, the President, the Prime Minister, and the Generalissimo. At SEXTANT it would be up to the United States and the British Commonwealth to abide by the pledge of TRIDENT that nothing would be left undone to relieve the siege of China. If the President and the Prime Minister made good on the plans for a major Allied operation in Burma, Sino-American co-operation could flourish. If, however, the Generalissimo was given reason to be dissatisfied with what he received from the President and the Prime Minister, then Stilwell's position would be compromised. If the bases of Sino-American co-operation were not present, Stilwell's personal efforts could do little to remedy the situation.
Presenting CHAMPION at Cairo
With General Hurley in Chungking, the myriad details attendant on the flight to Cairo of the Generalissimo, Madame Chiang, and their entourage were speedily worked out. It was agreed among the powers that Mr. Roosevelt and Mr. Churchill would meet the Chinese leader in late November and then confer with Marshal Stalin in Tehran, Iran. The Combined Chiefs of Staff would be present and so would Admiral Mountbatten and Generals Stilwell, Chennault, and Wedemeyer.
Stilwell arrived at Cairo on 20 November. The following day he was able to see General Marshall in company with General Hurley and General Somervell
CAIRO CONFERENCE participants were, left to right front, Generalissimo Chiang Kai-shek, President Roosevelt, Prime Minister Churchill, and Madame Chiang, and, standing left to right, Gen. Shang Chen, Lt. Gen. Lin Wei, General Somervell, General Stilwell, General Arnold, Field Marshal Sir John Dill, Admiral Mountbatten, and Maj. Gen. Adrian Carton de Wiart.
of Army Service Forces. Stilwell was anxious to raise many points with Marshall, presumably before the conferences began. His notebook records them:
Min[ister] of War. (replace [Gen Ho Ying-chin]). U.S. command after Pacific port [is opened]. 90 divisions. Offensive-defensive alliance. SEAC ambitions [to absorb CBI Theater]. Mountbatten wants me out. U.S. command of U.S. units. After CHAMPION? Future [of] CBI.
Louis [Mountbatten]: (1) Wants authority over ATC so as to "protect" it; (2) Wants China air plans for '44 and '45; (3) Wants responsibility for operation of Burma Road; (4) Liaison with Miles [U.S. Naval Observer Group in China]; (5) De Wiart [British liaison to Generalissimo] in our hqs; (6) Liaison offs [officers] with Chinese [divisions]; (7) Wants to absorb Rear Echelon; (8) Squadron of Spitfires to China; (9) Air staff mission; (10) Medical mission.
Claims GCM [Marshall] and Arnold told him to integrate [the Anglo-American air forces in India].
The plan for CHAMPION: Piece meal; indefinite objective; Indaw abortion. No problem.
UTOPIA [seizure of Andaman Islands] abortion, no bearing; leaves Chinese to hold sack; no British troops--unreliable Indian troops.26
Whether Stilwell presented these points at one session, or how Marshall reacted to them, is unknown. In his talk with Marshall, Hurley, and Somervell, Stilwell was warned that the President highly disapproved of his disrespectful references to the Generalissimo.27
The first plenary session of SEXTANT was set for 1100, 23 November 1943. The Joint Chiefs of Staff met briefly with Stilwell and Wedemeyer before the plenary session to receive their comments on CHAMPION, SEAC's plan for Burma. No attempt was made to weigh the plan of CHAMPION, which had been adopted over Stilwell's objections. Of the airborne operation, he remarked that he saw no point in cutting Hump tonnage just to drop a division in the jungle during the rains. Stilwell did not think the Japanese line of communications to Myitkyina a vital one and did not want it blocked at the expense of Hump tonnage (which would embarrass his relations with the Generalissimo and Chennault). However, Stilwell pledged that once CHAMPION began, he would do his best to carry it out. Wedemeyer commented that while CHAMPION did provide attacks on all key points, he did not particularly care for the Arakan situation, in which two divisions plus two brigades were given only the most limited objectives, for he mistakenly believed they faced but two Japanese regiments. Actually, the Japanese 54th Division was then moving up to join the 55th in the Arakan.
Stilwell's comments prefaced his presentation to the Joint Chiefs of Staff of the Generalissimo's and his proposals for China Theater, based on Stilwell's paper, Proposals for the Coming Conference. The Generalissimo called for occupation of north Burma, intensive training of the Chinese Army, and improvement of the line of communications to China. He desired B-29 operations from China Theater in early 1944, air attacks in the Formosa-Luzon area in October 1944 to support U.S. naval operations in that area, the taking of Canton and Hong Kong in November 1944-May 1945, and an attack on Formosa from Chinese ports, if required. The paper was most significant because it had the Generalissimo's approval. This was, so Marshall said, the first time since the war began that the Generalissimo had shown an active interest in the improvement and employment of his Army. General Marshall and Admiral Ernest J. King, U.S. Chief of Naval Operations, thought this attitude extremely important and not to be discouraged if at all possible.28 After this session
closed, the American service chiefs joined their colleagues and superiors for the plenary session.
Admiral Mountbatten had expected CHAMPION to be first presented to the British Chiefs of Staff by himself, and to the Joint Chiefs of Staff by General Wedemeyer. On their approving it, CHAMPION would go to the Combined Chiefs of Staff, and if they concurred, be presented to the Generalissimo, the President, and the Prime Minister as an agreed-on CCS proposal. This was the usual practice in such cases, but at SEXTANT it was reversed. The Generalissimo was present, though unfortunately for security reasons his arrival was not announced in advance, so neither the President nor the Prime Minister had been at the airport to greet him and Madame Chiang. This was a blow to Chinese pride.
Because the Generalissimo was at hand, and because Roosevelt and Churchill wanted him to enter immediately into military discussions, the SEAC plan was laid before the Generalissimo at once, and therefore without its having been considered by the CCS. Thus, the Generalissimo was being asked to approve CHAMPION in advance of its approval by the Allies.
As presented formally to the three Allied statesmen, to Harry L. Hopkins, Madame Chiang, and the highest service advisers, CHAMPION's first phase called for the advance of the Chinese 22d and 38th Divisions from Ledo, an operation then under way. In mid-January 1944, 15 Corps would move forward in the Arakan to take up an improved line, and would exploit any success that might be gained. At the same time 4 Corps would advance on Mawlaik, Minthami, and Sittaung, driving southeast as far as possible. In February 1944 three long-range penetration groups would attack. Paratroops would seize Indaw in mid-March after which the 26th Indian Division would fly in to hold it.29 A major amphibious operation would be staged in the Bay of Bengal. For security reasons, the amphibious operation was not further described to the Chinese. As for weather, Mountbatten hoped to end his advance by early April when the monsoon rains would break. During the monsoon, the long-range penetration groups would operate, and if the CCS gave the needed resources, the advance would resume after the monsoon's end. The rains were expected to prevent a Japanese reaction.30
The Chinese, apprised of CHAMPION weeks before by Stilwell, were immediately critical. The Generalissimo did not believe that 15 and 4 Corps were intended to advance far enough into Burma; he wanted them to drive on Mandalay. He insisted that the advance must be synchronized with a naval operation. But the Generalissimo's argument for a naval operation was now affected by a sovereign fact which he disregarded. The Japanese were known by the SEXTANT conferees to have completed a railway from Thailand to Burma which
made them independent of imports through Rangoon. The Generalissimo also insisted that whatever the demands of Burma operations the Hump lift must not fall below 10,000 tons a month. A day later Chennault gave the monthly requirements of the Chinese Air Force and the Fourteenth Air Force, 10,000 tons a month. Asked by General Arnold what that would leave for the Chinese Army in China, which had a major role to play in the reconquest of Burma, General Chennault simply replied that 10,000 tons was what he needed.31
Trying To Reach Agreement
These viewpoints having all been expressed, the conferees had two delicate tasks to handle simultaneously: to settle on a plan and to secure the Generalissimo's assent to it. Reversing the usual process by which plans were approved, in order to spare the Generalissimo's feelings, was leading into ever more tangled thickets. Mountbatten was sent to the Generalissimo's villa to explain that if the offensive toward Mandalay which the Chinese leader desired was carried out, it would entail diversion of all Hump tonnage. "Welcome change from telling me to fix it up," wrote Stilwell.32
As Admiral Mountbatten tried to explain the situation, the Generalissimo grew enthusiastic and announced he would press for both an airborne assault on Mandalay and 10,000 tons a month over the Hump, which would require an added 535 transports sent to India. Mountbatten finally escaped by saying that he would lay the Generalissimo's wishes before the CCS to see if they could find the 535 transports, which Mountbatten knew were nowhere to be had. The CCS formally stated that the 535 aircraft could not be found, and in view of the uncertainty surrounding the Generalissimo's attitude, Mountbatten was asked to obtain his formal agreement to go back into Burma.33
While Mountbatten, aided by Churchill, was essaying this task, Stilwell went with Marshall on 25 November to confer with the President. Before the interview Stilwell noted what he wanted to say to the President about the problems that faced him in China: "Ask FDR: Field chief of staff [to Generalissimo], can [have]: (1) Man power; (2) Executive authority; (3) U.S. troops; (4) Chinese-American command. Keep X-force, add one corps [as a force directly under Stilwell's command]"34
Preparing for his interview with the President, Stilwell sketched a point he wanted to make:
No matter what PEANUT agrees to, if something is not done about the Chinese high command the effort is wasted. I suggest stipulation of U.S. command, with real executive authority. If impossible over large group, then over composite Chinese-American corps. Lack of real power and control of Gmo. He will order. Kan pu will block. Suggest new Minister of War or thorough re-organization of [Chinese] War Department. Or American take over the first 30 complete and operate them [Stilwell's italics].35
Stilwell and Marshall entered the President's room, and Stilwell began his presentation. The President seemed to hear him with "little attention" and in the middle of Stilwell's report broke in to talk about the Andaman Islands, on which he wanted to put some heavy bombers. Trying to bring the discussion back to China's problems, Stilwell pled for some U.S. combat troops in CBI. In reply, the President offered to put a brigade of U.S. Marines in Chungking. "Marines are well known," said the President. "They've been all over China, to Peking and Shanghai and everywhere. The Army has only been in Tientsin."
Stilwell told the President that the Chinese had reneged on their agreements, that to carry out his mission he needed more power and executive authority over Chinese troops. Stilwell also dwelt on the "basic factors of our presence" in China, that is, the Chinese were to supply the men while the Americans supplied weapons and training. The President, though promising to speak to the Generalissimo at once on these points, seemed to show little interest.36
The President's attitude depressed Stilwell, but the conference was not all negative. Mr. Roosevelt stated that the Generalissimo had agreed to CHAMPION.37 Then came bigger news. An American corps was out of the question, but the Chinese could have equipment for ninety divisions and could help occupy Japan. At the JCS meeting that day General Marshall had remarked that there was pressure on the President to give the Generalissimo something to show as a result of his trip, that the President had been spoken to about arming the third thirty divisions but had postponed any definite commitment, though Roosevelt had made it clear the United States intended eventually to equip ninety Chinese divisions.38 Now the President told the Generalissimo's joint chief of staff of the ninety-division intention, and Stilwell duly listed it among the "Cairo results."39
Returning to his quarters, Stilwell took the notes he had prepared for his talk with the President, drew a line diagonally across the page and wrote above them: "NB: FDR is not interested."40
While Stilwell was preparing to meet with the President, Mountbatten and the Prime Minister attempted to secure a firm assent from the Generalissimo to CHAMPION. Initially, as the President told Stilwell, they succeeded. On the early afternoon of 25 November the Generalissimo agreed to go into Burma on two conditions: that the Royal Navy's Eastern Fleet command the Bay of Bengal, and that an amphibious operation be mounted there. That evening the Generalissimo met again with the President and reversed himself on every point.
Mountbatten was again sent into action to restore the situation but found the Generalissimo obdurate. So Mountbatten turned to Churchill, had lunch with him, and the Prime Minister agreed that he with the President and Madame Chiang would try to bring the Generalissimo round. The Allied leaders met the afternoon of the 26th at tea, unfortunately with neither secretaries nor minutes. After tea the Prime Minister and Madame Chiang separately told Mountbatten that the Generalissimo had agreed on every point. Such was the situation when Churchill and Roosevelt with their key advisers departed for Tehran, and the Generalissimo prepared to go to Chungking. For the first time in the war, the Prime Minister, the President, and the CCS had met the Generalissimo and endeavored to secure a binding agreement from him. "They have been driven absolutely mad," wrote Admiral Mountbatten, "and I shall certainly get far more sympathy from the former in the future."41
With the dignitaries out of the way, Admiral Mountbatten called a meeting of the SEAC delegation on 27 November to clear up the loose ends. He felt "staggered" when Stilwell came in to tell him that just before departing that morning the Generalissimo had reversed himself again, rejected all his previous agreements, and ordered Stilwell, as the latter put it, to "stay and protest. I am to stick out for TOREADOR [the airborne assault on Mandalay] and 10,000 tons [a month over the Hump]."42 Mountbatten thought quickly. He had arranged to inspect the Ramgarh Training Center together with the Generalissimo in a few days and believed that if he had the elusive Chinese leader to himself for a few minutes he might succeed in getting a binding agreement from him. So he became diplomatically deaf, told Stilwell he had not understood him, and asked that a radio be sent to him at New Delhi.43
Summing up the SEXTANT Conference at that point, Stilwell asked himself: "So where are we? TARZAN? Tonnage? Command? Sure on equipment for 90 divisions. . . ."44
Thus, of the two delicate and simultaneous operations, the agreement and the plan, one had not been brought off. Nor was there agreement between the President, the Prime Minister, the CCS, and the JCS on future operations in SEAC. Churchill early indicated his attitude by telling Admiral Mountbatten on 21 November that he meant to have a landing on Sumatra or nothing, that if there was no such operation, he would take away SEAC's landing craft for an operation against the island of Rhodes in the Mediterranean.45 A few days later Marshall remarked that Roosevelt had expressed his opposition to any diversion of Royal Navy landing craft from BUCCANEER (now the code name for the Andamans operation which was to meet the Generalissimo's long-standing demand for an amphibious operation). This expression also was the view of the Joint Chiefs, who were strong for BUCCANEER. In a conference at Tehran between the President and the JCS, it was observed that the British would do all they could to cancel BUCCANEER for an operation against Rhodes. The President quickly replied that the Allies were obligated to the Chinese to stage BUCCANEER, an attitude which suggests that he was unaware of the Generalissimo's final reversal. However, at the first CCS session at Tehran the British Chiefs of Staff urged the abandonment of BUCCANEER, and it remained to be seen whose view would prevail.46
While the President was at Tehran, the Cairo Declaration was issued by the President and the Prime Minister as a joint pronouncement of the United States, the British Commonwealth, and China. In sharp contrast to the actual course of events at SEXTANT, the declaration read: "The several military missions have agreed upon future military operations against Japan. The Three Great Allies expressed their resolve to bring unrelenting pressure against their brutal enemies by sea, land, and air. This pressure is already rising." The declaration went on to pledge the return of Manchuria, Formosa, and the Pescadores to China, and that Korea should be free and independent. It then concluded: "With these objects in view, the three Allies, in harmony with those of the United Nations at war with Japan, will continue to persevere in the serious and prolonged operations necessary to procure the unconditional surrender of Japan."47
While the President and Prime Minister were meeting with Marshal Stalin at Tehran, the Generalissimo again changed his mind about Burma operations. While inspecting the Chinese New First Army at Ramgarh on 30 November, he again agreed to join in CHAMPION. He confirmed his resolve in a speech to the Chinese soldiers, placing them under Mountbatten and Stilwell for the coming operations.
CHIANG KAI-SHEK AT RAMGARH. Accompanied by Madame Chiang and Admiral Mountbatten the Generalissimo inspects the Chinese New First Army.
I feel greatly inspired today as I am here with you, officers and men, at this post. Being able to speak to you in a friendly land, is indeed, a rare opportunity. You must pay full attention to every word I say and bear it firmly in your mind. It shall serve as a moral encouragement for your endeavor to glorify our nation by adding a glorious page to the history of our national army. Now that our National Army is enabled to come over to India as a combined combat strength with our worthy allies, [it?] has already registered an illustrious page in our national annals.
It is also your good fortune that you are placed under the joint command of Admiral Lord Louis Mountbatten and General Joseph W. Stilwell, respectively supreme commander and deputy supreme commander of S.E. Asia Command. My expectation of the New First Army is for you to accomplish this worthy mission. My meeting with you here today is just like a family reunion which imparts profound attachment to both father and sons. It is therefore your duty to listen to my words as follows [here, the Generalissimo encouraged his troops to fight well for China]. I exhort you to keep my words. Unitedly under the joint command of Admiral Lord Mountbatten and General Stilwell you shall destroy the enemy. . . .48
Over the Watershed: The Changed Attitude Toward China
At Tehran the President met Marshal Stalin for the first time. Explaining his China strategy, the President spoke of converging attacks on north Burma, and of amphibious operations in the Bay of Bengal. The goal was to open the road to China and supply China so that it would stay in the war and, also, to put the Allies in a position to bomb Japan from Chinese bases. Marshal Stalin expressed no opposition to this, and, indeed, repeated his earlier promises to enter the war against Japan.49
After meeting and conferring with Marshal Stalin, the President, in the opinion of Robert E. Sherwood, arrived at certain conclusions with regard to the Soviet Union and its leader:
Roosevelt now felt sure that, to use his own term, Stalin was "getatable," despite his bludgeoning tactics and his attitude of cynicism toward such matters as the rights of small nations, and that when Russia could be convinced that her legitimate claims and requirements--such as the right to access to warm water ports--were to be given full recognition, she would prove tractable and co-operative in maintaining the peace of the postwar world.
If, therefore, good relations could be established with the Soviet Union, all the pieces of the postwar puzzle would fall into place. In the immediate present there was no doubt of what the Soviet Union wanted--a cross-Channel assault (OVERLORD) and a landing on the coast of southern France (ANVIL) as soon as possible and on as big a scale as possible.50 The President, therefore, would weigh operations in Southeast Asia in an atmosphere very different from that of the first conferences in Cairo a few days before. Such was the situation when the President, the Prime Minister, and the Combined Chiefs of Staff finished at Tehran and returned to Cairo.
Mr. Churchill and the British Chiefs of Staff immediately attacked BUCCANEER. Churchill took the Stalin promise to enter the war with Japan as a stunning surprise that changed the whole strategic picture. He called it a decisive event. Soviet entry in the Pacific war would give the Allies better bases than China ever could. In the light of Stalin's promise, operations in Southeast Asia lost much of their value. In this connection, he was astounded by SEAC's requirements for BUCCANEER, which he understood to be 58,000 men to oppose 5,000 Japanese. The other decisive event, said Churchill, was setting the date for OVERLORD. Nothing anywhere should interfere with that great operation. The proper course, the Prime Minister argued, was to cancel BUCCANEER and use its landing craft to reinforce the amphibious assault on southern France, ANVIL.51
The Prime Minister's pleased surprise at Marshal Stalin's promise to enter the Pacific war and his argument that because of it the strategic picture in the Pacific had changed since the first Cairo meetings were difficult to reconcile with the circumstance that the Soviet Union originally promised to enter the Pacific war in October 1943 at the Moscow Conference and repeated its promises in November.52
At some point during these post-Tehran discussions of BUCCANEER, a radio from General Boatner in north Burma to theater headquarters in New Delhi, detailing at length the command problems he had met with the Chinese, arrived at Cairo. By mischance, it had been so forwarded, and was then delivered to the SEAC delegation. Circulated as an admission by Stilwell's own headquarters that even U.S.-trained Chinese troops were unreliable, the radio was a telling argument against any campaign that depended on the Chinese in any capacity.53
Mr. Roosevelt with Admiral King and Admiral William D. Leahy, the President's Chief of Staff, held that there was a definite commitment to the Generalissimo, and that a whole train of unhappy consequences might follow if China's allies broke their promise. He had a moral obligation to the Chinese, Roosevelt remarked, and could not forego the operation without a great and readily apparent reason. There the 4 December session ended, with a directive from the President and Prime Minister to the Combined Chiefs to try to find agreement on that basis.54 The JCS met at 0900 on 5 December and found themselves still in accord on the need to execute BUCCANEER.
The Combined Chiefs met at 1030. General Marshall drew attention to a new strategic factor which had arisen since TRIDENT. The blast of world-wide publicity following SEAC's creation had attracted heavy Japanese reinforcements to Burma which would seize the initiative unless the Allies struck first. Marshall feared that such a Japanese offensive would imperil the Hump route. If it would be possible to abandon BUCCANEER and still carry out the North Burma Campaign, Marshall would not be seriously disturbed, but he did not think there would be a Burma campaign unless there was an amphibious operation. Admiral Leahy remarked briefly that canceling the amphibious operation meant either the failure or the abandonment of the Burma campaign.
The Chief of the Imperial General Staff, General Brooke, and Air Chief Marshal Sir Charles Portal repeated the arguments that BUCCANEER was a diversion from the main effort in Europe and that the Chinese contribution was a negligible factor. They also noted that the main effort against Japan was now to be made in the Pacific, which was inconsistent with a heavy allocation of resources to Burma. The meeting ended with a decision to present the various points in dispute to the President and the Prime Minister.55
Mr. Roosevelt opened the plenary session by pointing out that BUCCANEER was the dividing issue between the staffs. He acknowledged that the Generalissimo had left Cairo believing an amphibious operation would be carried out with TARZAN, the India-based portion of CHAMPION. The President was dubious about staking everything on Russian good will, for he feared that the Allies might sacrifice the esteem of the Chinese without later securing the aid of the Russians. Admiral King rebutted the argument that BUCCANEER had to be canceled to secure landing craft for ANVIL by stating that a two-division lift for ANVIL was in sight and might even be improved upon. This, he went on, would entail keeping back four months' production from the Pacific.
Though the intimate connection between BUCCANEER and Chinese participation in Burma operations was admitted by all, it was quite clear that many of those present hoped the Generalissimo would perform his share of the bargain even though his Allies reneged on theirs. The British were adamant in opposing BUCCANEER as a diversion from OVERLORD, and Churchill made it clear that he felt no obligations to the Chinese. The meeting ended with an agreement to inquire of SEAC what it could do if the bulk of its landing craft were taken away.56
So questioned, SEAC quickly replied that canceling BUCCANEER would, in the light of the Generalissimo's known attitude, lead to the collapse of TARZAN. In its stead SEAC suggested overland operations from Imphal toward Kalewa and Kalemyo in Burma (which if successful would be a long step toward Mandalay), continuation of the advance from Ledo, continuation of the current operations in the Arakan, and an assault by the long-range penetration groups at the proper time. SEAC acknowledged that this operation would not open the land route to China.57 Admiral Leahy described SEAC's estimate of 50,000 men for BUCCANEER as excessive, but General Wedemeyer replied that a smashing victory was needed to restore the morale of SEAC's troops and added that all the resources needed for BUCCANEER, except an added 120 carrier-based fighters, were in sight. Admiral King immediately said that he might find four or six escort carriers to fill the gap. But there was still no agreement on BUCCANEER, and the case went back to the President and the Prime Minister.58
On the night of 5 December Mr. Roosevelt accepted Mr. Churchill's arguments and withdrew his support from BUCCANEER. In abandoning BUCCANEER, the President overrode the very strongest protests of his service advisers. In his memoirs, Admiral Leahy wrote:
I felt that we were taking a grave risk. Chiang might drop out of the war. He never had indicated much faith in British intentions, but had relied on the United States. If the Chinese quit, the tasks of MacArthur and Nimitz in the Pacific, already difficult, would be much harder. Japanese man power in great numbers would be released to oppose our advance toward the mainland of Japan. Fortunately for us, the courageous Chinese stayed in the fight.
After the war, in writing his memoirs, Admiral King remarked that he had been "distressed" by the breach of the long-standing promise to the Chinese, and added that in his opinion this was the only time during the war when the President had overruled the Joint Chiefs.59
After agreeing to cancel BUCCANEER, the President and Hopkins drafted a radio to the Generalissimo telling him the bad news. The message was based on SEAC's estimate that there could be no major amphibious operation if BUCCANEER was canceled. The estimate was in error, as SEAC soon discovered, but the two U.S. leaders naturally accepted it, and, consulting Churchill but not the CCS, told the Generalissimo there could be no successful amphibious operation simultaneously with TARZAN. They asked him if he would go ahead without the amphibious operation (it will be recalled that the Chinese had never been told exactly what sort of operation was contemplated), or would he wait until November 1944 when there might be a major seaborne landing? In the meantime, the President suggested, all air transport would be concentrated on increasing the tonnage flown to China. Roosevelt and Hopkins held out the "fair prospect of terminating the war with Germany by the end of summer of 1944," which would release great resources for the Far East. (On the night of 6 December a poll of the CCS revealed that the earliest date any of them would set for the end of the war in Europe was February 1945, with half of them guessing it would be spring 1945.)60
Stilwell's Search for Guidance
On 6 December Stilwell and his political adviser, John P. Davies, Jr., met with the President and Hopkins. Stilwell had heard of unfavorable developments and was anxious to know what effect they would have on U.S. policy in China. Thanks to the rapprochement with the Generalissimo in October, the American soldier was still joint chief of staff for China Theater, was commanding two divisions of Chinese troops in India and Burma (one of them engaged in combat), and was commanding general of the U.S. China, Burma and India Theater. The President's radio could be expected to shock the Generalissimo, and guidance for Stilwell in the radically changed situation was essential.
For two years the President's declared policy had been to treat China as a Great Power and make of her a partner in a coalition with Britain, Russia, and the United States. In the course of this period the President had deferred continually to the Generalissimo's wishes, sometimes against the advice of his service chiefs. Thus, in March 1943, and again in May 1943, he had overruled them to back General Chennault, explaining his decision by the desperate urgency of China's need, and the necessity of acknowledging the wishes of the Generalissimo as Supreme Commander, China Theater.
The President had insisted on China's joining in the diplomatic councils of the Great Powers and had carried his point just ten weeks before at Moscow. In the course of the previous two years the United States had made a number of commitments to China, of which the chief was that of TRIDENT, to break the blockade of China at the earliest moment. Roosevelt had been a driving force in these developments and had often expressed his appreciation of the urgent character of China's needs.
Casablanca, TRIDENT, QUADRANT, had erected an imposing structure of plans and decisions; an entire new Allied theater, SEAC, under an aggressive commander, had been created. All these efforts had seemed to be building to a grand climax, CHAMPION, the culmination of these diplomatic and strategic efforts. CHAMPION would break the blockade of China, with all the momentous ensuing consequences.
Now, the situation was changed, in a dramatic reversal, and it was essential that Stilwell know how the President wanted to meet the situation. The President explained that the conference had come to an impasse and could not be permitted to end in disagreement. Therefore, he would yield to the British point of view. The United States and Russia had insisted on OVERLORD, and so, said the President, Churchill had insisted on giving up TARZAN.61
So much was clear, and Stilwell asked: "I am interested to know how this affects our policy in China." The President's reply was most indefinite. In retrospect, it appears that he had not decided what to do about China, and so Stilwell could not keep the conversation away from Roosevelt family history, the postwar development of China, and the new, postwar Asia. Stilwell and Davies prepared minutes of the conversation, and from them, Stilwell tried to puzzle out just what the President wanted him to do.62
Stilwell concluded that the President's policy was: "Keep China in the war. We must retain our flank position [vis-à-vis Japan]. If CKS flops, back somebody else."63
But how was all this to be done in the face of BUCCANEER's cancellation and the inevitable compromising of Stilwell's position? "Only remarks pertinent to question," wrote Stilwell, were "If TARZAN is out, we can boost the [Hump] tonnage. VLR bombers [B-29's] can bomb Japan."64 Several months later Stilwell told Marshall that he had sought guidance at Cairo but had found none.65 Marshall did not challenge this statement.
Indeed, the President's remarks raised more questions than they answered. If, under Japanese attack, or economic distress, the Nationalist regime began to crumble, then, according to the President, the United States would "look for some other man or group of men, to carry on."66 Whom did the President have in mind, a dissident war lord like Marshal Li Chi-shen or the Communists? At what point was Stilwell to begin dealing with such people?
Knowing that Stilwell's position in China would be almost impossible after SEXTANT, Marshall offered him a high post in another theater. Stilwell declined it.67 Talking with Marshall the day after his interview with the President, Stilwell learned: "George hopeful about Germany. 'Hang on and keep going.' Nothing else he could tell me. Everything dangling."68
One thing, arms for ninety divisions, might have kept the Generalissimo from regarding SEXTANT as an utter disappointment. On 10 December Stilwell attended a meeting to discuss the project.69 Three weeks later, after Stilwell returned to CBI, Marshall was told by OPD: "The commitment regarding the Lend-Lease equipping of Chinese divisions the President actually made at SEXTANT is not known. We are proceeding on the assumption the President made no commitment on the timing of the flow of equipment."70 Stilwell was informed accordingly. As for the landing craft that on Churchill's insistence were taken from SEAC to reinforce ANVIL, several weeks later the British Chiefs of Staff, supported by the Prime Minister, made the first of several attempts to have ANVIL canceled for operations elsewhere in the Mediterranean.71
The Generalissimo's answer to the President's radio telling him of BUCCANEER's cancellation was awaited anxiously, for SEAC could have no CCS directive on amphibious operations until it was known how the Chinese would react to the disappointment. Discussion of future operations continued while the CCS awaited his reply. General Marshall suggested that the land operations outlined by SEAC might well be undertaken by the Chinese advancing from Yunnan and screened by the U.S. long-range penetration groups directed at Quebec, with some of the troops released by BUCCANEER forming a reserve. The Chief of the Imperial General Staff countered with the proposal that Mountbatten's new mission should be to guard Assam by active offensive operations.72
Meanwhile, Stilwell sent a radio to General Hearn in Chungking, ordering Hearn to see the Generalissimo and urge him to go ahead with his share of the campaign, regardless of BUCCANEER's cancellation.73 This action was consistent with Stilwell's often expressed view that seizure of the Andaman Islands contributed nothing to operations in Burma. His reasoning was supported by the facts that the Japanese had opened a railway to Thailand, so that they no longer depended on the port of Rangoon, and that airfields on the Andamans were only 100 miles closer to Rangoon than those already in Allied hands, so that their possession would not be decisive in air operations against Rangoon, even if such were of vital importance.
When the Generalissimo's answer to the President arrived at Cairo on 9 December, it spoke in ominous tones:
I have received your telegram of December Sixth. Upon my return I asked Madame Chiang to inform you of the gratifying effect the communique of the Cairo Conference has had on the Chinese army and people in uplifting their morale to continue active resistance against Japan. This letter is on the way and is being brought to you by the pilot, Captain Shelton.
First, prior to the Cairo Conference there had been disturbing elements voicing their discontent and uncertainty of America and Great Britain's attitude in waging a global war and at the same time leaving China to shift as best she could against our common enemy. At one stroke the Cairo communique decisively swept away this suspicion in that we three had jointly and publicly pledged to launch a joint all-out offensive in the Pacific.
Second, if it should now be known to the Chinese army and people that a radical change of policy and strategy is being contemplated, the repercussions would be so disheartening that I fear of the consequences of China's inability to hold out much longer.
Third, I am aware and appreciate your being influenced by the probable tremendous advantages to be reaped by China as well as by the United Nations as a whole in speedily defeating Germany first. For the victory of one theater of war necessarily affects all other theaters; on the other hand, the collapse of the China theater would have equally grave consequences on the global war. I have therefore come to this conclusion that in order to save this grave situation, I am inclined to accept your recommendation. You will doubtless realize that in so doing my task in rallying the nation to continue resistance is being made infinitely more difficult.
Because the danger to the China theater lies not only in the inferiority of our military strength, but also, and more especially, in our critical economic condition which may seriously affect the morale of the army and people, and cause at any moment a sudden collapse of the entire front. Judging from the present critical situation, military as well as economic, it would be impossible for us to hold on for six months, and a fortiori to wait till November 1944. In my last conversation with you I stated that China's economic situation was more critical than the military. The only seeming solution is to assure the Chinese people and army of your sincere concern in the China theater of war by assisting China to hold on with a billion gold dollar loan to strengthen her economic front and relieve her dire economic needs. Simultaneously, in order to prove our resolute determination to bring relentless pressure on Japan, the Chinese air force and the American air force stationed in China should be increased, as from next spring, by at least double the number of aircraft already agreed upon, and the total of air transportation should be increased, as from February of next year, to at least 20,000 tons a month to make effective the operation of the additional planes.
In this way it might be possible to bring relief to our economic condition for the coming year, and to maintain the morale of the army and the people who would be greatly encouraged by America's timely assistance. What I have suggested is, I believe, the only way of remedying the drawbacks of the strategy concerning the China and Pacific theaters. I am sure you will appreciate my difficult position and give me the necessary assistance. I have instructed General Stilwell to return immediately to Chungking and I shall discuss with him regarding the details of the proposed changed plan and shall let you know of my decision as to which one of your suggestions is the more feasible.
From the declaration of the Teheran Conference Japan will rightly deduce that practically the entire weight of the United Nations' forces will be applied to the European front thus abandoning the China theater to the mercy of Japan's mechanized air and land forces. It would be strategic on Japan's part to liquidate the China Affair during the coming year. It may therefore be expected that the Japanese will before long launch an all-out offensive against China so as to remove the threat to their rear, and thus re-capture the militarists' waning popularity and bolster their fighting morale in the Pacific. This is the problem which I have to face. Knowing that you are a realist, and as your loyal colleague, I feel constrained to acquaint you with the above facts. Awaiting an early reply,
The Generalissimo's requests were not enough to bring agreement on a new directive to SEAC for a major amphibious operation. For the time being SEAC and Stilwell would have to be governed by the SEXTANT decisions, which were sufficiently explicit. These ordered the occupation of upper Burma in spring 1944 (1) to improve the air route and (2) to open land communications with China. An amphibious operation at the same time was approved. TWILIGHT, the B-29 project, was also approved, and the Fourteenth Air Force, the Chinese Army, and the Chinese Air Force would be improved for intensified operations in and from China. The general concept of the SEXTANT decisions on the Pacific and Asia was that "the main effort against Japan should be made in the Pacific." What was attempted elsewhere in Asia would be in support of that main effort. There would be first priority for ANVIL and OVERLORD, the supreme operations for 1944.75
SEAC Tries To Salvage Burma Operations
Admiral Mountbatten was an aggressive commander, of proven desire to close with the enemy. Moreover, he and his subordinates, of whom Stilwell was one, were bound by the SEXTANT decision to clear north Burma. Lastly, fighting in the Arakan and in north Burma had been under way for weeks, with both sides reinforcing. BUCCANEER's demise left SEAC the alternatives of postponing an attempt at a major co-ordinated offensive for another year, which would probably mean the end of operations to clear north Burma, or of staging an amphibious operation smaller than BUCCANEER, with the hope that it would still be enough to meet the Generalissimo's stipulation for such an operation in the Bay of Bengal, and so lead him to take active part in the Burma fighting.
Mountbatten's first reaction was hesitant, because the shipping requirement would be the same if the attempt was large or small, and because no worthwhile objective could be seized with what shipping was at hand. When an amphibious assault on the Arakan coast was first proposed, he did not see how it could be presented as that previously promised the Generalissimo or how it alone could fulfill SEAC's basic directive. However, since such would be a starting point for the future, would enable the long-range penetration groups to do their work, and would not commit him to an offensive in central Burma, he directed his staff to study it.76
Since the amphibious operation promised the Generalissimo had never been defined to him, and since his stipulation had been for a major one, if the SEAC planners could somehow evolve a major amphibious effort the question of Allied good faith would be answered, even if belatedly, and attention would be focused on the Generalissimo's reaction. By adjusting the delicate balances for a plan that might be imposing enough to satisfy the Generalissimo yet still fit within SEAC resources, SEAC's planners evolved PIGSTICK. PIGSTICK called for an assault on the Mayu peninsula aimed at Akyab. Two divisions plus two brigades would be used in a southward advance down the peninsula and one division in an amphibious assault aimed at surrounding and destroying not less than 20,000 Japanese. One more landing like PIGSTICK, perhaps in the Ramree-Cheduba area, could take staging areas that would put 15 Corps within reach of Rangoon.77 TARZAN was modified into GRIPFAST, an attack on north and central Burma with an airborne landing at Indaw on the Japanese line of communications to Myitkyina.
In the initial negotiations between Mountbatten and the Chinese on the commitment to battle of the U.S.-sponsored Chinese divisions in Yunnan (Y-Force) Stilwell entered enthusiastically. SEAC's new plan, thought Stilwell, was almost the same as TOREADOR (the airborne landing in central Burma), which had so appealed to the Generalissimo at SEXTANT.78 Mandalay itself was now the objective of SEAC's efforts, while the amphibious operations were enlarged.79
For whatever reasons, the Generalissimo was unimpressed with SEAC's attempt to meet his demands for an amphibious operation before he would move. Like a wary customer, he questioned the value of the substitute that SEAC was offering. Since even in the genial atmosphere of Cairo he had been conspicuously unwilling to commit himself, it was apparent that he would drive a hard bargain, particularly since the President's radio from Cairo had offered him an alibi. His final reply to the President's radio on 17 December
stressed his need for money and air power but implied that a large enough amphibious operation might even yet secure his co-operation.
My telegram of December 10th must have reached you by this time. I have discussed with General Stilwell the proposed change in the plan of campaign and have come to the following conclusions:
In case the original plan of concentrating warships and transports for landing troops cannot be completely carried out, it would be better to defer the amphibious all-out offensive till November next as you suggested so that the enemy in Burma may be annihilated once and for all. In the meantime preparations for an offensive against Burma next spring should proceed at full speed as originally planned, thus enabling us to launch an attack on land at any moment which is deemed favorable, or at any time before next autumn if a sufficient number of warships and transports can be concentrated to effect a grand scale landing on the enemy's flanks, without waiting till the autumn of next year.
In this way the Burma front might be liquidated sooner than one could anticipate. I have decided to accept your suggestion that the general offensive against Burma should be postponed to November next or sooner if the original amphibious operation could be launched. At the same time I cannot but reiterate that in the intervening period of one year during which there will be little hope of re-opening the Burma Road, the China theater of war will be in a most critical situation. I therefore earnestly ask you to do all in your power to accede to my request for financial assistance and for an increase of air force and air transportation as stated in my telegram of December 10th, in the hope that the danger to the China theater may be removed and the drawbacks in the strategy against Japan remedied in accordance with your consistent friendly policy of rendering assistance to China. Awaiting an early reply.
Doing his best to meet the Generalissimo's requirements, Mountbatten gave Stilwell for a further "talking point" information that the Chindit forces in the proposed Burma operations would total 20,000 men, approximately half of whom would be assisting the Chinese advance.81
In talks with the Generalissimo and Madame Chiang, Stilwell learned that the Chinese expected the United States to pay the entire cost of constructing the B-29 fields at Cheng-tu. The Generalissimo's request for a loan of one billion dollars gold, the Chinese insistence on setting an official exchange rate of 20 to 1 between their currency and the U.S. dollar when the black market rate was 240 to 1, and rising rapidly, and now the President's alleged promise to pay the whole cost of the B-29 fields introduced a new factor of importance, the sheer monetary cost of attempting operations in China.
The Generalissimo estimated that the Cheng-tu fields would cost two to three billion dollars of Chinese currency. "At 20 to 1, at least 100 million gold, of which one-half will be squeeze. Appalling," wrote Stilwell. Stilwell protested that his understanding was the United States would "help" with the project. No, retorted Madame Chiang, the President had promised to pay for everything. Disgusted by what to him seemed a naive softness, Stilwell wrote: "One more example of the stupid spirit of concession that proves to them that we are suckers. 'We'll put in VLR bombers' (no bargaining). Then, 'we'll pay for the
fields' (no bargaining). Same on air freight--promise without bargain. Same on equipment for army--promise without bargain. Same on Chinese Air Force. Same on 14th Air Force. Same on everything."82
When the discussion came around to the current operations in Burma, the Generalissimo's actions on 18 and 19 December baffled Stilwell. On the 18th the Generalissimo gave Stilwell full command of the Chinese forces in India and those now fighting in the Hukawng Valley. The next day he rejected Mountbatten's proposals for a major attack on Burma, which made Stilwell write: "[The Generalissimo] is afraid that even concerted attack by all available forces has only one chance in a hundred and yet he'll sit back and let a small force take on the Japs alone."83 With the Generalissimo's promise in hand, Stilwell prepared to leave to take command of the Hukawng Valley operations. He believed that with the Ledo Force there was just a chance he might be able to link with the Chinese Yunnan divisions somewhere near Myitkyina.84
Stilwell's decision to assume active command of the forces in north Burma is not discussed or analyzed in his private or official papers. In the light of his habit of analyzing every major step this circumstance suggests he thought the move an obvious one. By December 1943 the post of chief of staff to the Supreme Commander, China Theater, was simply a paper one, without staff, directives, or duties. The Chinese had never agreed to set up the Sino-American staff through which Stilwell was to have functioned as Joint Chief of Staff, China Theater. After the Three Demands crisis of June 1942 the Generalissimo had largely ignored both him and his suggestions. Therefore, Stilwell's post of chief of staff to the Generalissimo would not require his presence in China.
There was Stilwell's still-existing mission of improving the combat efficiency of the Chinese Army, but his superiors had not objected to his conclusion that because Chinese delay had wasted two years there was little more he could do, and were themselves coming to the very similar conclusion that little more should be attempted than that which Stilwell had already begun, and which his subordinates in China could carry out as a matter of routine.
Since October 1943 the only major development had been the SEXTANT Conference, which had so obviously compromised Stilwell's position in China that Marshall had asked him if he wanted to be recalled. Mountbatten, Stilwell's superior, was actively soliciting the Generalissimo's aid in Burma operations, thus relieving Stilwell of responsibility for that task.
There remained the operations in Burma, which had been under way since 30 October 1943. For two months the American officers of Chih Hui Pu had been trying to achieve a satisfactory solution, but without success. General Boatner, Stilwell's deputy in north Burma, who had been actively exercising field command, was now a victim of pneumonia.85 General Sun, who might
have commanded, had made it very plain that he wanted to retreat. The Chinese were now heavily engaged, and the situation had been described to Stilwell as critical. So Stilwell prepared to go to north Burma and assume command in the jungle. He was then sixty years of age.86
The conduct of American military-diplomatic relations with China was tacitly assumed by the President. In 1942 and 1943 Stilwell had presented many memorandums to the Generalissimo, to which the Chinese had rarely replied. In 1944, the President sent one message after another to the Generalissimo on military matters, and these the Generalissimo could not ignore. As will be seen, the role of CBI Theater headquarters in these exchanges was the humble and mechanical one of delivering the text of these presidential proddings to the Generalissimo.
A Changing U.S. Attitude
Once again in the history of the U.S. effort in China, Burma, and India, the issues were about to be placed before the President, this time by Stilwell at Madame Chiang's suggestion. Stilwell was not hopeful of the President's willingness to intercede, but he adopted the suggestion.87 Manifestly, Stilwell did not feel that the action of the President and the Prime Minister in reneging at Cairo on the long-promised amphibious operation made it unnecessary or inadvisable for the Generalissimo to take action in Burma or that it made ungraceful any criticism of the Generalissimo's reluctance from within those powers that had broken their pledges to him. So, Stilwell told Marshall that the SEAC plan was now virtually what CBI Theater had been urging all along, that if the President would exhort the Generalissimo to cross the Salween River when his allies attacked Burma, the Chinese leader might play his part.88 If the Generalissimo knew of this move, he could have reflected that his own message to the President two days before had accepted one of the two choices the President had offered, and that in the past the President had extended credits, lend-lease, and air support without asking anything in return.
Drafted by the War Department, the President's reply indicated that Roosevelt had moved away from the Generalissimo's and Chennault's views and was a great deal closer to Stilwell's. The President returned a qualified negative to the Generalissimo's requests. Describing himself as fully aware of the military and economic situation in China, the President said that the best the United States could do was to aid in the immediate opening of a land line of communications to China. The military actions involved in so doing would afford greater protection to the Hump air route. Roosevelt told the Generalissimo of Mountbatten's planning the largest possible operation to retake Burma and expressed his hope that the Generalissimo would do everything he
could to carry out the part reserved for China. Nothing whatever was said about postponing active operations until November 1944. Roosevelt discounted what could be done by more air power in China until the line of communications had been improved. The Chinese might find comfort in the President's assurance that plans to increase Hump capacity to 12,000 tons a month were well advanced, provided an advance by the Allied ground forces forestalled a Japanese attempt to interrupt the airline. The message closed with the brief comment that the Treasury Department was weighing China's request for a billion-dollar (U.S.) loan.89
Thus, the President was suggesting that China act and was stressing action on the ground rather than in the air. The Generalissimo had accepted one of the alternate courses offered by the President, waiting until November 1944 to advance into Burma, only to find that the President had quietly abandoned it. Did the Generalissimo's linking the cancellation of BUCCANEER with a request for one billion dollars anger the President? Whatever the reason, the changed tone and shifts in emphasis of the President's reply, the ever stronger and more demanding nature of its successors, suggest that the President had made up his mind about China. At Cairo Roosevelt had been uncertain and unable to guide Stilwell; after Cairo and a few weeks of consideration, the President was striking out along the line of insisting that China take the offensive in return for the lend-lease she had received.
Despite the President's urgings, the Generalissimo's reply was negative. It even had overtones of the sardonic. He agreed to leave the Ledo forces at Mountbatten's disposal but stated that the Y-Force would move only if the Allies took the Andaman Islands, Rangoon, or Moulmein. If they succeeded in taking Mandalay or Lashio, he would order his armies into Burma even if there was no amphibious operation.90
General Hearn, to whom Stilwell had entrusted the American share of negotiations with the Generalissimo, did not believe the Generalissimo's reply was final but thought rather that he was bargaining for a bigger amphibious operation or a pledge that the Burma campaign would definitely include capture of Rangoon. Nor did he believe the Generalissimo was aware of the size of the effort that Mountbatten might be able to make. If the Generalissimo agreed to commit Y-Force, 325,000 Allied combat troops would be involved in the Burma operation.91
Though urged by Hearn and Stilwell to accept the Generalissimo's Mandalay-Lashio offer, this was further than Mountbatten would go. Indeed, a certain asperity was entering his references to the Chinese. Asking that the
United States put pressure on the Generalissimo, Mountbatten remarked: "I do not see why we should continue to supply him with munitions if they are to be used solely for internal political purposes."92
Still determined on an offensive, Admiral Mountbatten went on with preparations for PIGSTICK, the assault on the Mayu peninsula. He told the British Chiefs of Staff that while PIGSTICK was within SEAC's capabilities, "if any further resources are taken from me . . . I shall have to cancel the operation."93
That the British Chiefs of Staff did not favor PIGSTICK became apparent when they suggested to the CCS that if PIGSTICK was canceled three fast LST's (landing ship, tank) and other landing craft could be released for a landing at Anzio, Italy. After examining the landing craft situation in the Mediterranean and considering the old promise to the Generalissimo to make an amphibious operation, the Joint Chiefs of Staff urged that plans and preparations for PIGSTICK continue with no further withdrawal of landing craft from SEAC. Moreover, Stilwell's initial attempts to advance in north Burma were meeting with success and an operation to the south would divert some Japanese from him.
While these discussions between the Joint and British Chiefs of Staff were under way, the British Chiefs of Staff told Mountbatten that they did not think PIGSTICK could be carried out, and, although there was still no decision by the CCS, ordered him to return the landing craft in question to the Mediterranean. The departure of the craft, together with the warning by his commanders in chief that they could not carry out PIGSTICK during the favorable weather period of February 1944 unless it was ordered by 30 December at the latest, forced Mountbatten to cancel the operation without awaiting CCS approval.94
Admiral Mountbatten canceled PIGSTICK with reluctance, for the action meant to him the probability of no worthwhile offensive against the Japanese for at least a year after SEAC's formation and would have an adverse effect on morale. In a last attempt at an amphibious operation, Mountbatten ordered preparations for BULLDOZER, a much smaller amphibious operation in the Arakan. A message from Mr. Churchill to "mark time for a day or two till we get matters cleaned up" was enough to end it, for even a day's delay would affect the time to mount it before rough weather began in the Bay of Bengal. Thus, the last hope of meeting the Generalissimo's demand for an amphibious operation was gone.
These events were enough to dampen even the buoyant Mountbatten's enthusiasm for a Burma campaign. Where a week before he had said: "I have no intention of allowing operations in Northern Burma to fade on account of abandonment of proposed operations elsewhere," he now told his staff: "The
quickest and most efficient way of taking supplies on a large scale into China is through a port rather than by a long and uncertain land route."95
Reflecting the strategic developments of SEXTANT and the Generalissimo's reluctance to engage in Burma operations, the Strategy and Policy Group, OPD, on 8 January 1944 submitted its comments on the "future military value of China Theater." The planners stated that since the main effort in the Pacific would be made in the central and southwest areas of that great expanse, the mission of Stilwell's CBI Theater should be to give air support to the main effort. The bases from which this support was to come should be in areas already secure, because to acquire any more territory would require of the Chinese Army an efficiency not likely to be attained before 1946-47. No further effort should be made, the paper went on, to equip Chinese ground forces beyond enabling them to control areas they already had. Therefore, all available Hump airlift capacity should be devoted to building up air power in China, which was believed to be the best way of preventing China's collapse, as well as of aiding Pacific operations. Offensive operations in Burma to thwart a Japanese threat to the existing India-China air line of communications were still thought necessary.96
Before the SEXTANT Conference, the United States placed great emphasis on major operations in Southeast Asia to break the blockade of China and divert Japanese strength from the Southwest Pacific. President Roosevelt had been most interested in the implications of this policy as it applied to Asia. At SEXTANT his attitude changed; the amphibious operation demanded by the Generalissimo as the price of his co-operation in Burma was canceled, and for a time it seemed the President was willing to postpone Burma operations until November 1944. The Generalissimo asked for a billion U.S. dollars and heavy air reinforcements so that China might withstand another year's blockade. He was not willing himself to make a major effort to break it. The President's reply was drafted by the War Department and moved toward full support of Stilwell. During these discussions, the British Chiefs of Staff withdrew certain essential landing craft from Mountbatten, in effect ending his hopes of a major amphibious operation. December ended with Stilwell taking his post in north Burma to command the now heavily engaged Chinese New First Army, with the President urging China to play a more active part in the war, and with OPD suggesting that the mission of CBI Theater should be to give air support to Allied operations in the Pacific.
Table of Contents ** Previous Chapter (1) * Next Chapter (3)
1. (1) Ltr, Auchinleck to Stilwell, 7 Sep 43; Quotation from Ltr, Stilwell to Auchinleck, 16 Sep 43. Item 226, Bk 3, JWS Personal File. (2) Mountbatten Report, Pt. A, par. 7.
2. Extract, SAC's Personal Diary, 30 Oct 43. SEAC War Diary.
3. Stilwell's plan has not been found. Probably it is reflected in the views expressed in the memorandum cited in note 7, below.
4. (1) SEAC Plan, SAC (43) 2, 28 Oct 43; Min, SAC's Mtg, 31 Oct 43; Rad 7, SEACOS to COS, 31 Oct 43. SEAC War Diary. (2) Rad AMMDEL 1963, Merrill to Stilwell, 3 Nov 43. Item 1162, Bk 4, JWS Personal File.
5. Stilwell Diary, 3 Nov 43. (See Bibliographical Note.)
6. Actually present were the 5th and 7th Indian Divisions with three brigades each, and the 81st West African Division with two brigades. Mountbatten Report, Pt. B, pars. 34-37.
7. Memo, Stilwell for SEAC, 27 Oct 43. SNF 215.
8. (1) Rad 22, Wedemeyer to Marshall for Maj Gen Thomas T. Handy, 6 Nov 43; Rad SEACOS 83834, Mountbatten to COS, 10 Nov 43; Extract, SAC's Personal Diary, 7 Nov 43. SEAC War Diary. (2) Rad AMMDEL 2008, Merrill to Stilwell, 8 Nov 43; Rad AMMDEL 2023, Merrill to Stilwell, 10 Nov 43; Rad AMMDEL 2036, Merrill to Stilwell 11 Nov 43. Items 1203, 1225, 1236, Bk 4, JWS Personal File.
9. (1) Msg, Roosevelt to Chiang, 30 Jun 43; Ltr, Soong to Hopkins, 21 Jul 43. Bk IX, Hopkins Papers. (2) Memo, Hearn for Generalissimo, 1 Nov 43. Item 1139, Bk 4, JWS Personal File. (3) Incl to Memo, Somervell to Hopkins, 5 Nov 43. Bk VII, Hopkins Papers. (4) Robert E. Sherwood, Roosevelt and Hopkins, MS, IX-2-93. This manuscript quotes the President as aware of China's weakness but as desiring to be friendly with her 400,000,000 people and so wanting China to sign the Four Power Declaration.
10. Min, JCS 97th Mtg, 20 Jul 43.
11. Compilation of Background Material for SEXTANT, Table 4g, prep by Strategy and Policy Gp, OPD. ABC 337 (18 Oct 43) Sec 5, A48-224.
12. CPS 86/2, 25 Oct 43, sub: Defeat of Japan Within Twelve Months After Defeat of Germany.
13. Memo, Col Joseph J. Billo, Chief, Strategy Sec OPD, for Chief, Strategy and Policy Gp OPD, 4 Nov 43, sub: Reanalysis of Our Strategic Position in Asia. ABC 337 (18 Oct 43) Sec 5, A48-224.
14. JCS 533/5, 8 Nov 43, sub: Recommended Line of Action at Next U.S.-British Stf Conf.
15. Stilwell's Mission to China, Chs. II and X.
16. (1) Stilwell's Mission to China, Ch. X. (2) The Stilwell Papers, pp. 237-38. (3) In his Black Book, 6 November 1943, Stilwell wrote: "Is this real cooperation, or am I going goofy? . . . The catch is probably that he's willing but the blocking backs in the War Ministry will throw us for a loss. But just now, we are all honey and sweetness."
17. Memo, Stilwell for Generalissimo, 5 Nov 43. Stilwell Documents, Hoover Library.
18. The Stilwell Papers, pp. 237-38.
19. Stilwell Documents, Hoover Library.
20. That the Generalissimo returned a written answer is implied in The Stilwell Papers, p. 240. However, the authors have not been able to find it.
21. The Stilwell Papers, p. 236.
22. Ibid., p. 238.
23. Notes, Conf, NMC, 11 Nov 43. Marginal notes show action directed for Americans. Stilwell Documents, Hoover Library.
24. The Stilwell diaries of this period have several appreciative comments on General Hurley. Hurley's recollections of his first meetings with General Stilwell were given to the authors. Intervs with Gen Hurley, Jan 49, Feb 50.
25. The Stilwell Papers, pp. 238-40.
26. (1) These jottings are from one of Stilwell's notebooks of the type in which he kept his diary. This one is labeled Data, and is hereafter cited as Data Notebook. At the top of the page on which these entries begin, Stilwell wrote "GCM" in bold letters. (2) Stilwell's fears about Mountbatten's attempts to whittle away his authority in India and China are also expressed in Rad AGWAR 863, Stilwell to Marshall, 11 Nov 43. Item 1234, Bk 4, JWS Personal File.
27. The Stilwell Papers, p. 245.
28. (1) See pp. 57-58, above. (2) Min, JCS 128th Mtg, 23 Nov 43, Item 2. (3) CCS 405, 22 Nov 43, sub: Role of China in Defeat of Japan. (4) Japanese Study 89.
29. (1) SEAC War Diary, 23 Nov 43. (2) Min, SEXTANT Conf, First Plenary Mtg, Villa Kirk, 23 Nov 43. (3) Henry H. Arnold, Global Mission (New York: Harper & Brothers, 1949), p. 460.
30. (1) Min, CCS 129th Mtg, 24 Nov 43, Item 5. (2) CCS Info Memo 166, 18 Dec 43.
31. (1) Min cited n. 29(2). (2) Min, CCS 129th Mtg, 24 Nov 43, Item 7.
32. (1) The Stilwell Papers, p. 246. (2) SEAC War Diary, 24 Nov 43.
33. (1) SEAC War Diary, 24 Nov 43. (2) Min, CCS 130th Mtg, 25 Nov 43, Item 1.
34. Data Notebook. A little earlier in the Data Notebook, in the first version of his notes for the conference with the President, Stilwell put it: "FDR. Recommendations. Private army of one corps. Keep X-Force and add one in China. Recommend to PEANUT more power for me. Field Chief of Staff. Oust running dog. FDR. My mission complicated by not knowing what direct messages [from Roosevelt to Chiang] contain No bargaining power. (TWILIGHT)."
35. Data Notebook.
36. (1) Handwritten pages headed Story of J. Peene, Sr. (Hereafter, Story of J. Peene, Sr.) Stilwell Documents, Hoover Library. (2) Joseph Peene was General Stilwell's maternal grandfather. Mr. Peene was famous in family tradition for paying his employees in gold pieces. Because the Generalissimo later asked the President for $1,000,000,000 gold the event may have reminded the sometimes waggish Stilwell of this episode from the days of the gold standard. See letter, Mrs. Winifred A. Stilwell to Sunderland, 4 August 1952. OCMH. (3) The Stilwell Papers, p. 246.
37. The Stilwell Papers, p. 246.
38. (1) Story of J. Peene, Sr. The exact wording is: "What shall we give the Chinese? Equip. for 90 XX [divisions]. But the American Corps is out, and we give them Japan. What a laugh for the Japs." (2) Min, JCS 130th Mtg, 25 Nov 43, Item 6.
39. Data Notebook.
41. (1) Extracts, SAC's Personal Diary, 25, 26, 27 Nov 43; quotation from Extract, 27 Nov 43. SEAC War Diary. (2) The Stilwell Papers, p. 246.
42. (1) SEAC War Diary, 27 Nov 43. (2) Stilwell Diary, 27 Nov 43.
43. SEAC War Diary, 27 Nov 43.
44. Stilwell Diary, 27 Nov 43.
45. SEAC War Diary, 21 Nov 43.
46. (1) Min, JCS 131st Mtg, 26 Nov 43, Item 3. (2) Min, Mtg of President and JCS, American Legation, 28 Nov 43. (3) Min, CCS 131st Mtg, 26 Nov 43, Item 4. (4) Min, CCS 132d Mtg, 30 Nov 43.
47. U.S. Department of State, United States Relations With China (Washington, 1949) p. 519.
48. Address, Generalissimo to Chinese New First Army in India, 30 Nov 43. JWS Misc Papers, 1943. (See Bibliographical Note.)
49. (1) Robert E. Sherwood, Roosevelt and Hopkins: An Intimate History (New York: Harper & Brothers, 1948) pp. 778-79. (2) CM-IN 1946, Ambassador W. Averell Harriman to Marshall, 4 Nov 43.
50. Sherwood, Roosevelt and Hopkins, pp. 788, 798-99.
51. Min, SEXTANT Conf, Third Plenary Mtg, Villa Kirk, 4 Dec 43.
52. The original statement was made by Soviet Foreign Minister Vyacheslav M. Molotov to Cordell Hull at Moscow and confirmed shortly after by Molotov to Harriman, who promptly relayed it to General Marshall. (1) Cordell Hull, The Memoirs of Cordell Hull (New York: The Macmillan Company, 1948) II, 1309. (2) Rad cited n. 49(2).
53. (1) Story of J. Peene, Sr. (2) Ltr, Hill to Ward, 2 Sep 52. OCMH.
54. (1) Min cited n. 51. (2) Min, CCS 134th Mtg, 4 Dec 43, Item 4.
55. Min, CCS 135th Mtg, 5 Dec 43.
56. Min, SEXTANT Conf, Fifth Plenary Mtg, Villa Kirk, 5 Dec 43.
57. Rad SEACOS 38, 6 Dec 43. Min, SEXTANT Conf, p. 312.
58. Min, CCS 136th Mtg, 5 Dec 43, Item 1.
59. (1) William D. Leahy, I Was There: The Personal Story of the Chief of Staff to Presidents Roosevelt and Truman Based on His Notes and Diaries Made at the Time (New York: Whittlesey House, McGraw Hill Book Company, Inc., 1950) pp. 213-14. (2) Ernest J. King and Walter Muir Whitehill, Fleet Admiral King: A Naval Record (New York: W. W. Norton & Company, Inc., 1952), p. 525. (3) Sherwood, Roosevelt and Hopkins, p. 800.
60. (1) Sherwood, Roosevelt and Hopkins, pp. 800-801; quotation, p. 801. (2) Arnold, Global Mission, p. 473.
61. See Story of J. Peene, Sr., atchd illustration.
62. The Stilwell Papers, page 251, has the text of the conversation.
63. (1) Story of J. Peene, Sr. (2) Elsewhere, Stilwell gives his impression of the President's wishes as: "Policy: 'We want to help China.'--Period." Stilwell Undated Paper (SUP) 65. (See Bibliographical Note.)
64. Story of J. Peene, Sr.
65. CM-IN 4651, Stilwell to Marshall, 7 Mar 44.
66. The Stilwell Papers, p. 252, quoting Roosevelt.
67. Interv with Marshall, 6 Jul 49.
68. Stilwell Diary, 7 Dec 43.
69. Stilwell Diary, 10 Dec 43.
70. Memo, Handy for Marshall, 31 Dec 43, sub: Equipping Chinese Divs, sent as CM-OUT 11706, Marshall to Sultan for Stilwell, 31 Dec 43.
71. Ltr, Lt Gen Frederick E. Morgan, COSSAC, to Secy, COS, 6 Jan 44. COSSAC (44) 5, AFHQ G-3 File, OCMH. Also published as CCS 446/1, 8 Jan 44, sub: Three Div Lift for ANVIL. ABC 384 (Europe) 1 Mar 43, Sec 2A, A48-224.
72. Min, CCS 138th Mtg, 7 Dec 43.
73. Rad AMSME 1720, Stilwell to Hearn, 7 Dec 43. Item 1502, Bk 5, JWS Personal File.
74. Rad AGWAR 919, Chiang to Roosevelt, 9 Dec 43. Item 1505A, Bk 5, JWS Personal File.
75. (1) CCS 417, 2 Dec 43, sub: Plan for Defeat of Japan. (2) CCS 426/1, 6 Dec 43, sub: Rpt to President and Prime Minister. (3) CCS 397 (rev), 3 Dec 43, sub: Specific Opns for Defeat of Japan.
76. (1) See Ch. I, above. (2) Rad, CCS to Mountbatten, 5 Dec 43; Rad, Wedemeyer to Mountbatten, 6 Dec 43; Rad, Mountbatten to COS, 11 Dec 43. SEAC War Diary.
77. Rad, SEAC (RL) 19, 19 Dec 43. ABC 384 (Burma) 8-25-42, Sec IV, A48-224.
78. Stilwell Black Book, 19 Dec 43.
79. Memo, Stilwell for Generalissimo, 19 Dec 43. Item 1533, Bk 5, JWS Personal File.
80. Rad AGWAR 941, Chiang to Roosevelt, 17 Dec 43. Item 1529, Bk 5, JWS Personal File.
81. Rad COPIR 10, Mountbatten to Stilwell, 20 Dec 43. Item 1541, Bk 5, JWS Personal File.
82. Stilwell Black Book, 18 Dec 43.
83. (1) Stilwell Diary, 18 Dec 43. (2) Stilwell Black Book, 19 Dec 43. (3) Quotation from The Stilwell Papers, p. 265.
84. The Stilwell Papers, p. 266.
85. Stilwell Diary, 21 Dec 43.
86. The Stilwell Papers, p. 285. Stilwell's sixty-first birthday was on 20 March 1944.
87. (1) Stilwell Black Book, 19 Dec 43. (2) The Stilwell Papers, p. 263.
88. Rad AGWAR 947, Stilwell to Marshall, 19 Dec 43. Item 1537, Bk 5, JWS Personal File.
89. (1) Item 58, OPD Exec 10. (2) Rad WAR 4092, Roosevelt to Chiang, 20 Dec 43. Item 1546, Bk 5, JWS Personal File.
90. (1) Rad, Lt Gen Sir Adrian Carton de Wiart, Prime Minister's and SAC's Personal Representative to Chungking, to Mountbatten, 23 Dec 43. SEAC War Diary. (2) Rad AM 2934, Hearn to Merrill, 28 Dec 43; Rad AM 2372, Sultan to Stilwell, 30 Dec 43. Items 1571, 1587, Bk 5, JWS Personal File. (3) CM-IN 1161, Hearn to Marshall, Handy, and Maj Gen Joseph T. McNarney, 2 Jan 44.
91. CM-IN 14577, Hearn to Stilwell and Marshall, 23 Dec 43.
92. Rad SEACOS 53, Mountbatten to COS, 24 Dec 43. SEAC War Diary.
93. Rad SEACOS 54, Mountbatten to COS, 27 Dec 43; Min, SAC's 37th Mtg, 27 Dec 43. SEAC War Diary.
94. (1) CCS 452, 30 Dec 43, sub: Cancellation of Opn PIGSTICK. (2) CCS 452/2, 6 Jan 44, sub: Cancellation of Opn PIGSTICK. (3) Rad, COS to Mountbatten, 29 Dec 43; Rad, Mountbatten to CCS, 6 Jan 44. SEAC War Diary.
95. (1) Rad, Churchill to Mountbatten, 7 Jan 44; Extract, SAC's Personal Diary, 28 Dec 43; Quotation from Min, SAC's Fifth Stf Mtg, 6 Jan 44. SEAC War Diary. (2) JPS 346, 2 Jan 44, sub: Cancellation of Opn PIGSTICK. (3) Notes by Brig. Gen. Frank N. Roberts on draft manuscript of this chapter. OCMH.
96. Memo, Gen Roberts, Chief, Strategy and Policy Gp, OPD, for ACofS OPD, 8 Jan 44, sub: Future Mil Value of China Theater; Memo, Billo for Roberts, 13 Jan 44, sub: Future Mil Value of China Theater. OPD 201 (Wedemeyer, A. C.), A47-30. | 1 | 9 |
Linux can be an effective addition to a Windows network for several reasons, most of which boil down to cost. Windows has achieved dominance, in part, by being less expensive than competitors from the 1990s, but today Linux can be less expensive to own and operate. This is particularly true if you're running Windows NT 4.0, which has reached end-of-life and is no longer supported. (Windows 2000 will soon fall into this category, as well.) For these old versions of Windows, you're faced with the prospect of paying to upgrade to a newer version of Windows or switch to another operating system. Linux can be that other OS, but you should know something about Linux's features and capabilities before you deploy it.
Effectively deploying Linux requires understanding the OS's capabilities and where it makes the most sense to use. This chapter begins with a look at the Linux roles that this book describes in subsequent chapters. The bulk of this chapter is devoted to an overview of Linux's capabilities and requirements when used as a server or as a desktop system. Because you may be considering replacing Windows systems with Linux, this chapter concludes with a comparison of Linux to Windows in these two roles.
Where Linux Fits in a Network
Most operating systems—and Linux is no exception to this rule—can be used in a variety of ways. You can run Linux (or Windows, or Mac OS, or most other common general-purpose OSs) on personal productivity desktop systems, on mail server computers, on routers, and so on. This book doesn't cover every possible use of Linux; instead, it focuses on how Linux interacts with Windows systems on a local area network (LAN) or how Linux can take over traditional Windows duties. This book will further focus on areas in which you can get the most "bang for the buck" by deploying Linux, either in addition to or instead of Windows systems. Chapter 2 covers Linux deployment strategies in greater detail, but, for now, consider Figure 1-1, which depicts a typical office network. Linux's mascot is a penguin (known as Tux), so Figure 1-1 uses penguin images to mark the areas of Linux deployment covered in this book.
Of course, Linux can be used in roles not shown in Figure 1-1. In fact, Linux can be an excellent choice for an OS for such roles as a web server; however, because such uses aren't LAN-centric or don't tie closely to Windows, this book doesn't cover them. You might want to begin with just one or two functions for Linux on your network, such as a file server or a Dynamic Host Configuration Protocol (DHCP) server. Some systems, such as backend database servers, may be so vital and data-intensive that replacing them with Linux systems, although possible, is a major undertaking that can't be adequately covered here.
Linux as a Server
Traditionally, Linux's strength has been as a server OS. Many businesses rely upon Linux to handle email, share files and printers, assign IP addresses, and so on. Linux provides a plethora of open source programs to handle each of these server tasks, and many more. Before you attempt to deploy a Linux server, though, you should understand Linux's strengths and weaknesses in this role, what type of hardware you're likely to need, and what types of software you'll need.
Linux Server Capabilities
As seen in Figure 1-1, Linux can be deployed in many different ways. Indeed, Figure 1-1 presents an incomplete picture because it focuses on only those roles described in this book. Linux firewalls, web servers, databases, and more are all available. Still, Linux has certain strengths and weaknesses as a server that you should understand as you plan where to use it. Linux's greatest strengths as a server include the following:
- Reliability
- Linux has earned a reputation as a very reliable OS, which, of course, is a critically important characteristic for servers.
- Cost
- You can download Linux from the Internet at no cost (aside from connect charges), which can be important in keeping costs down. Of course, the up-front purchase price (or lack of it) is only part of the equation; support costs, hardware costs, and other factors can be much more important. Linux's total cost of ownership (TCO) is a matter of some debate, but most studies give Linux high marks in this area.
- License issues
- The Linux kernel is licensed under the GNU General Public License (GPL), and much of the rest of Linux uses the same license. Most other Linux programs use other open source licenses. The result is that you're not bound by restrictive commercial license terms; as a user or administrator, you can do anything with Linux that you can do with a commercial OS, and then some. If you want to redistribute changes to a program, though, some open source licenses impose restrictions, so you should check the license to see how you're permitted to distribute the changes. (Of course, most commercial OSs don't even let you see the source code!)
- Security issues
- Linux isn't vulnerable to the worms and viruses that plague the Internet today; almost all of these pests target Windows systems. Of course, a Linux server can still be inconvenienced by worms and viruses because it may need to process them in some way; a Linux mail server may still need to accept email with worms and perhaps then identify and delete the worm. Linux won't be infected by the worm, though—at least, not by any worm that's known to be spreading as I write.
- Server software selection
- As a Unix-like OS, Linux has inherited many popular Unix servers, such as sendmail and Samba. In fact, some of these, including Samba, were written using Linux as a primary target OS.
- Remote administration
- Linux provides several remote administration methods, ranging from remote logins using text-mode tools such as Secure Shell (SSH) or Telnet to tools designed for remote administration via web browsers, such as Webmin (http://www.webmin.com). Of course, remote administration isn't unique to Linux, but Linux presents more options than do most non-Unix OSs.
- Resource use
- With Linux, you have fine control over what programs you run, which enables you to trim a system of unnecessary items to help get the most out of your hardware. For instance, most servers don't need to run a local GUI, so Linux enables you to run a system without one, and even to omit the X files and programs from the hard disk.
- Customizability
- In addition to customizing the system to minimize resource use, you can modify Linux to achieve other ends. For instance, you can recompile the kernel to add or omit features that help the system operate as a router, or you can alter the startup sequence to accommodate special needs. Taken to the extreme, these features help those who run Linux on specialized embedded devices, but such uses are well beyond the scope of this book.
- Hardware flexibility
- Linux is available on a variety of hardware, ranging from specialized embedded versions of Linux to supercomputers. This book is designed to help those running Linux on fairly traditional small- to mid-sized servers and desktop systems using conventional Intel Architecture 32 (IA-32; a.k.a. x86) hardware or other hardware of comparable power. Even in this realm, Linux is very flexible; you can run it on AMD64, PowerPC (PPC), Alpha, and other CPUs, which lets you standardize your OS even if you happen to have different hardware platforms.
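The remote-administration point above often takes the form of small scripts that wrap SSH. The sketch below merely assembles an ssh command line for running one administrative command on a remote host; the user name, host name, and command are placeholders, not values from this book:

```python
import subprocess

def build_ssh_command(user, host, remote_cmd, port=22):
    """Assemble an ssh invocation that runs one command on a remote host."""
    return ["ssh", "-p", str(port), f"{user}@{host}", remote_cmd]

# Example: check disk usage on a hypothetical server.
cmd = build_ssh_command("admin", "server.example.com", "df -h")
print(cmd)

# To actually run it (requires ssh in the PATH and key-based login):
# subprocess.run(cmd, check=True)
```

In practice you'd loop over a list of hosts to apply the same command to several servers, which is one reason text-mode remote access scales so well for administration.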
Most of these advantages are advantages of Unix-like OSs generally, and so apply to other OSs, such as Solaris and FreeBSD. Compared to such OSs, Linux's greatest strengths are its hardware flexibility, open source licensing, and low cost (although several other low-cost and open-source Unix-like OSs exist).
Of course, no good thing is without its problems, and Linux is no exception to this rule. Fortunately, Linux's problems are minor, particularly when the OS is used on a server:
- Administrative expertise requirements
- Linux requires more in the way of administrative expertise than do some alternatives. For most organizations, this factor ultimately boils down to one of the variables in TCO calculations: Linux administrators are likely to demand higher salaries than Windows administrators do. On the other hand, Linux's reliability, scalability, and other factors frequently more than compensate for this problem in the TCO equation.
- Security issues
- Although immunity to infection by common Windows worms and viruses is a Linux advantage, Linux has its own security drawbacks. Crackers frequently attempt to break into Linux servers, and they sometimes succeed. In theory, this should be difficult to do with a well-administered system, but neglecting a single package upgrade or making some other minor mistake can leave you vulnerable. Of course, Linux isn't alone in this drawback, but it's one to which you should be alert at all times.
The term hacker is used by the popular media to refer to computer miscreants—those who break into computers and otherwise wreak havoc. This term has an older and honorable meaning as referring to skilled and enthusiastic computer experts, and particularly programmers. Many of the people who wrote the Linux kernel and the software that runs on it consider themselves hackers in this positive sense. For this reason, I use an alternative term, cracker , to refer to computer criminals.
Overall, Linux's strengths as a server far outweigh its weaknesses. The OS's robustness and the number of server programs it runs are powerful arguments in its favor. Indeed, those are the reasons commercial Unix variants have traditionally run many important network services. Linux has been slowly eroding the commercial Unix market share, and its advantages can help you fill the gaps in a Windows-dominated network or even replace existing Windows servers.
Typical Linux Server Hardware
As noted earlier, one of Linux's strengths is that it runs on a very wide range of hardware. Of course, this isn't to say that you can use any hardware for any particular role; Linux won't turn a 10-year-old 80486 system with a 1-GB hard disk into a powerhouse capable of delivering files to thousands of users.
Linux most commonly runs on IA-32 hardware, and much Linux documentation, including this book, frequently presents IA-32 examples. IA-32 hardware is inexpensive, and it's the original and best-supported hardware platform for Linux. Still, other options are available, and some of these are well worth considering for a Linux server.
One of the problems with IA-32 is that it's a 32-bit platform. Among other things, this means that IA-32 CPUs are limited to addressing 2^32 bytes, or 4 GB, of RAM. (Intel Xeon processors provide a workaround that involves page swapping, or hiding parts of memory to keep the total available to the CPU at just 4 GB.) Although a 4-GB memory limit isn't a serious problem for many purposes, some high-powered servers—particularly those that support many user logins—need more RAM. For them, using a 64-bit CPU is desirable. Such CPUs can address 2^64, or about 1.8×10^19, bytes of RAM, at least in theory. (In practice, many impose lower limits at the moment, but those limits are still usually in the terabyte range.)
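The 32- and 64-bit address-space figures above are easy to verify with a quick calculation:

```python
# Maximum bytes addressable with 32-bit and 64-bit addresses.
ADDR_32 = 2 ** 32
ADDR_64 = 2 ** 64

print(ADDR_32)                # 4294967296 bytes
print(ADDR_32 // 2 ** 30)     # 4 (GB, at 2^30 bytes per GB)
print(f"{ADDR_64:.1e}")       # 1.8e+19 bytes
```

Even a terabyte-range practical limit on a 64-bit CPU is 256 times or more the 4-GB ceiling of IA-32, which is why 64-bit platforms matter for memory-hungry servers.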
Several 64-bit CPUs are available, including the DEC (now Compaq) Alpha, several AMD and Intel CPUs that use the AMD64 architecture, Intel's IA-64 Itanium, the IBM Power64 (the first of which is the PowerPC G5), and the SPARC-64. Of these, the Power64 and AMD64 platforms are likely to become more common in the next few years. With AMD and Intel both producing AMD64 CPUs, they are likely to take over the market dominated by IA-32 CPUs through most of the 1990s and early 2000s. Apple is rapidly shifting its Macintosh line to the Power64, and IBM and a few others are producing Power64-based servers. Of course, if you already have another type of 64-bit system, or you have an opportunity to get one at a good price, you can run Linux on it quite well. Linux support for the AMD64 and Power64 platforms is likely to be more mature than for other 64-bit platforms, though.
Of course, not all servers need 64-bit CPUs. For them, IA-32 CPUs, such as Intel's Pentium 4 or AMD's Athlon, are perfectly adequate. In fact, many systems can make do with much weaker CPUs. A DHCP server can run quite well on an old 80386, for instance. Just how much CPU power you need depends on the function of the server. Functions such as handling thin-client or other remote logins, converting PostScript into non-PostScript printer formats (particularly for multiple heavily used printers), and handling hundreds or thousands of clients are likely to require lots of CPU power. Lighter duties, such as running a DHCP server, a local Domain Name System (DNS) server, or even a remote login server for a network with a dozen or so computers, require much less in the way of CPU power. For such purposes, you can probably run Linux on a spare or retired computer. Even a system that's too weak to run a modern version of Windows can make a good small server.
Disk space requirements also vary with the server's intended role. Most obviously, a file server is likely to require lots of disk space. Precisely what "lots" is, though, depends on how many users you have and what types of files they store. Disk-intensive servers frequently use Small Computer Systems Interface (SCSI) hard disks rather than Advanced Technology Attachment (ATA) disks, because SCSI disks scale better in multidisk setups and because disk manufacturers often offer higher-performance disks in SCSI form only. SCSI disks cost more than do ATA disks, though. You'll have to judge for yourself whether your budget permits the use of SCSI disks. Recently, Serial ATA (SATA) disks have started to emerge as an alternative to traditional parallel ATA disks and SCSI. Depending on the drivers, SATA disks may appear to be SCSI disks in Linux, but they aren't.
Although servers can vary greatly in their major hardware components and needs, network connectivity is a common factor. All servers require good network links. On a LAN, this most commonly means 100- or 1000-Mbps (1-Gbps, or gigabit) Ethernet. Linux ships with excellent Ethernet support; chances are any Ethernet adapter will work. Modern motherboards frequently come with built-in Ethernet, too. Of course, not all Ethernet adapters are created equal: some are more stable, produce better throughput, or consume less CPU time. As a general rule, Ethernet adapters from major manufacturers, such as Intel, 3Com, and Linksys, are likely to perform best. No-name bargain-basement Ethernet cards will almost certainly work, but they may give more problems or perform less well under heavy network loads.
Most servers have similar video display capabilities. In this case, though, servers' needs are unusually light; because a server's primary duty is to deliver data over a network, a high-end graphics card is not a requirement. You might want something that's at least minimally supported by Linux (or by a Linux X server, to be precise) so you can administer the computer at the console using GUI administration tools; however, this isn't a requirement.
Typical Linux Server Software
When deciding how to deploy Linux on a LAN, you must consider what hardware and software to use. Linux isn't a monolithic beast you can decide to install and be done with; you must make choices about your Linux installation. These choices begin with your decision about a Linux distribution —a collection of software and configuration files that's bundled together with an installation program. Some distributions are better suited than others to use on a server, although with enough extra effort, you can use just about any Linux distribution on a server computer. Beyond the distribution, you must pick individual server programs. These choices are very specific for the purpose of the computer. For instance, if you run a mail server computer, you need to decide which mail server program to run, and this decision can have important consequences for everything else you do on the computer. Such a decision is likely to be relatively unimportant on other types of server computers.
The term server can have multiple meanings; it can refer to either an individual program that delivers network services or to the computer on which that program runs. (A similar dual meaning applies to the word client on the other end of the connection.) In most cases, the meaning is obvious from the context, but when necessary, I clarify by explicitly specifying a server computer or program. Some people use the term service to refer to server programs or to the features that they provide.
Picking a distribution for server use
Your choice of distribution depends partly on your choice of hardware platform. Some distributions, such as Debian GNU/Linux, are available on a wide range of CPU architectures, whereas others, such as Slackware Linux, are available for just one CPU. If you're already familiar with a distribution, and you want to use it for your server, you may want to plan your hardware purchases around this fact. If you already have the hardware, though, or if you're constrained to use a particular platform for policy or budget reasons, you may need to narrow the range of your hardware choices. Broadly speaking, the IA-32 platform has the most choices for distributions, although a few distributions run only on other platforms. The most popular Linux distributions used on servers include the following:
- CentOS
- This distribution, headquartered at http://freshmeat.net/projects/centos/, is a community-based fork of Red Hat's Enterprise Linux. As such, it's technically very similar to Red Hat, but support details are quite different.
- Debian
- This distribution is one of the few completely noncommercial distributions; it's maintained entirely by volunteers. It uses the Debian package format and is well-respected for its main ("stable") branch. This branch is on a very long release cycle, though, so it sometimes lags when major new versions of component packages are released. (Bug-fix and security updates are prompt, however.) Debian's "unstable" branch is much more up to date, but it's not as well tested as the "stable" branch. Keeping up to date is fairly simple because of Debian's Advanced Package Tools (APT) package, which enables software updates over the Internet by typing a couple of commands. Because Debian doesn't sell official packages with support, obtaining outside support requires you to hire an independent consultant. To configure Debian, you normally edit text-mode configuration files in a text editor rather than use a GUI configuration tool. Overall, Debian is a good choice for servers, which usually must be stable above all else. Debian is available for an unusually wide range of CPUs, including IA-32, SPARC, PowerPC, Alpha, IA-64, and several other platforms. To learn more, check Debian's web site, http://www.debian.org.
- Fedora Core
- This distribution is the freely redistributable version of Red Hat. Its development cycle is faster than that of the official Red Hat releases, and part of Fedora's purpose is to serve as a test bed for new packages that will eventually work their way into Red Hat. Fedora can be a good choice if you like Red Hat, don't have a lot of money to spend on a commercial distribution, and don't mind doing without the official Red Hat support. Fedora Core is available for IA-32 and AMD64 CPUs; you can find it at http://fedora.redhat.com.
- Gentoo
- Like Debian, Gentoo is maintained by volunteers. This distribution emphasizes building packages from source code; its package manager, known as portage, enables you to type a one-line command that downloads the source code and patches, compiles the software, and installs it. (This system is similar to the ports system of FreeBSD.) Portage can be a good way to tweak compiler settings for your CPU, installed libraries, and so on, but the time spent compiling packages can be a drawback. Also, if you maintain many systems with differing hardware, you may have trouble cloning a system, because the optimizations used on your original system may not work on other systems. One advantage of Gentoo is that it's easy to keep up to date with the latest packages, but you can make your system unstable if the latest version of an important package breaks other programs. Like Debian, Gentoo eschews GUI configuration tools in favor of raw text-mode configuration file editing. Gentoo is available for IA-32, AMD64, SPARC, and PowerPC CPUs. You can learn more at http://www.gentoo.org.
- Red Hat
- Red Hat is probably the most popular distribution in North America, particularly if you include its Fedora variant. This distribution originated the RPM Package Manager (RPM) package format. Even many programs that don't ship with Red Hat are available in IA-32 and source RPM packages, which makes software installation easy. Many third-party programs are released with Red Hat in mind, making Red Hat a safe bet for running such programs. With the release of Fedora 1, Red Hat has focused its main product line (Red Hat Enterprise) on the business market, especially servers, although it can certainly be used on desktops. The main selling point of Red Hat Enterprise is its support, including system updates and maintenance via the Red Hat Network. These include GUI update tools that can grab updates over the Internet. Red Hat ships with a number of Red Hat-specific GUI configuration tools, but they're often limited enough that you'll need to bypass them and edit configuration files by hand. IA-32 is the primary Red Hat platform, although some versions are available for AMD64, Itanium, and certain IBM servers. Some older versions ran on SPARC and Alpha CPUs, but these versions are very outdated. The official Red Hat web site is http://www.redhat.com.
- Slackware
- Slackware is the oldest of the common Linux distributions. It uses tarballs as its package distribution format and eschews GUI configuration tools. Slackware ships with fewer packages than most Linux distributions; to install more exotic programs, you may need to compile them from source. Slackware may be a good choice if you're used to "old-style" Unix system administration, but if you're relatively new to Linux, you may want to pick something else. Slackware is available only on the IA-32 platform. You can learn more at http://www.slackware.com.
- SuSE
- This distribution is RPM-based but isn't derived from Red Hat directly. It's a good choice for both server and desktop use, and it includes a GUI administration tool called YaST. You can update packages over the Internet with this tool, as well as administer the local system. SuSE is primarily an IA-32 and AMD64 distribution, although an older PowerPC release is also available. Check http://www.suse.com for more information. SuSE was originally an independent German company, but it was purchased by Novell in early 2004.
Ultimately, any of these distributions (or various other less popular ones) can be configured to work equally well, assuming you're using IA-32 hardware. You may be better able to get along with a particular distribution depending on your system administration style and particular needs, though. If you like to tweak your system to get the very best possible performance, Gentoo's local-build approach may be appealing. If you like GUI configuration tools for handling the simple tasks, Red Hat or SuSE should work well. For an extremely stable system, Debian is hard to beat. If your hardware is old, consider Debian or Slackware, which tend to install less extraneous software than the others. If you want to use the most popular distribution, look at Red Hat. If the package management tool is important, pick a distribution that uses the tool you like.
Although the Linux kernel and most Linux tools are open source, some distributions include small amounts of proprietary software. This inclusion can make redistribution of the OS illegal, so you should check a distribution's license terms before redistributing it or installing it on many systems. Most distributions are freely redistributable, although most of them are available for sale. Buying a full package helps provide financial support to the distributions, which helps to advance future development. (Debian and Gentoo are exceptions; no commercial packages of these distributions are available, although you can buy them on cut-rate CDs that provide little or no profit to the actual developers. CentOS and Fedora have no for-sale versions, either, unless you count Red Hat in that role.)
Distribution choice is a highly personal matter. What works well for one administrator may be a poor choice for another. You may also run into policy issues at your workplace; for instance, you may need to buy a package from a vendor that offers certain support terms, which might rule out some distributions. If you haven't used Linux extensively in the past, you should study the options, focusing on distributions that provide GUI configuration tools you can use to get the basics up and running quickly. For more experienced administrators, I recommend whatever you've used in the past, provided you found it satisfactory. If you've used other Unix-like OSs but not Linux, try to find a Linux distribution with an administrative style similar to what you've used before. You should also try to minimize the number of Linux distributions you use, ideally to just one; this helps simplify system administration because you'll have just one set of distribution-specific tools and packages to learn, and you can retrieve package updates from just one source.
Picking individual server programs
Once you've picked and installed a Linux distribution, you'll need to decide what server programs to use. For most major server classes, most distributions provide a single default choice, such as the sendmail mail server. It's usually easiest to stick with the default, but if the server in question is the primary function of the computer, and if the default choice isn't what you'd like to use, you can change it.
Sometimes the alternative programs work with the same protocol as the default; for instance, the Postfix, Exim, and qmail servers are all popular alternatives to sendmail. All are implementations of the Simple Mail Transfer Protocol (SMTP), and you can replace one package with another without changing the server computer's interactions with other computers. (You usually need to implement minor or major changes to the server system's local configuration, though, and perhaps replace some support programs.) Other protocols for which multiple server program implementations are available in Linux include pull email using the Post Office Protocol (POP) or Internet Message Access Protocol (IMAP), the File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP; a.k.a. web server protocol), the Kerberos authentication protocol, the Remote Frame Buffer (RFB) remote GUI login protocol, and the DNS protocol. This book covers many, but not all, of these protocols. In some cases, alternative servers are configured in similar ways, but sometimes configuration is very different for the various server programs.
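Because all these mail servers speak the same SMTP protocol, clients interact with them identically; about the only place the choice of server software shows through is the greeting banner. The sketch below guesses the server software from such a banner; the banner strings are illustrative examples, not captured output from any real host:

```python
def identify_smtp_software(banner):
    """Guess the server software from an SMTP 220 greeting banner."""
    known = ("Postfix", "Exim", "Sendmail", "qmail")
    for name in known:
        if name.lower() in banner.lower():
            return name
    return "unknown"

print(identify_smtp_software("220 mail.example.com ESMTP Postfix"))  # Postfix
print(identify_smtp_software("220 mx.example.net ESMTP Exim 4.34"))  # Exim
```

The point is that swapping sendmail for Postfix changes only this banner and the server's local configuration; remote mail clients and other SMTP servers neither know nor care which implementation is answering.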
Other times, you may need to choose between two incompatible protocols that accomplish similar tasks. For instance, POP and IMAP are two different ways to deliver email received by an SMTP server to client systems (that is, users running mail programs such as Outlook Express or KMail). Other examples include RFB and X11R6 for remote GUI access; Telnet and SSH for remote text-mode access; the Server Message Block/Common Internet File System (SMB/CIFS), Network File System (NFS), and AppleShare for remote file-sharing protocols; SMB/CIFS, AppleShare, Line Printer Daemon (LPD), and Common Unix Printing System (CUPS) for printer sharing; SSH and FTP for remote file transfers; and NetBIOS domain logins, Lightweight Directory Access Protocol (LDAP), Kerberos, and Network Information Services (NIS) for remote authentication. In all these cases, the competing protocols have their own advantages and disadvantages. For instance, SSH provides encryption whereas FTP and Telnet do not. Some protocols are closely associated with particular OSs; for instance, SMB/CIFS and NetBIOS most commonly are used on Windows-dominated networks, whereas NFS is used in Unix-to-Unix file sharing and AppleShare is used most often with Mac OS systems. This book covers many of these protocols, but it focuses on those that are used most commonly on Windows-dominated networks, or at least that have potential to enhance such networks.
Chapter 2 describes in greater detail when you're likely to deploy each server covered in this book, and subsequent chapters describe the protocols and servers themselves.
Linux on the Desktop
Although it's only one system in Figure 1-1 (or two, if you count the thin client), Linux use as a desktop OS is different enough from Linux server use that it requires its own description. Several classes of differences are particularly noteworthy.
- User interfaces
- Generally speaking, desktop systems require better user interface devices (video cards, monitors, keyboards, and mice) than do servers. Linux usually works well with the same hardware as Windows systems, but with one caveat: the very latest video cards sometimes aren't well supported in Linux. Staying a generation or two behind the leading edge is therefore desirable in Linux.
- Disk and network hardware
- Many classes of servers require the very best in disk and network hardware, but this is less often the case for desktop uses. You can often get by with average ATA devices and typical Ethernet (or other network) hardware. Some desktop systems, though, do need excellent disk or network hardware. These are typically high-performance systems that run scientific simulations, specialized engineering software, and so on.
- CPU and RAM
- Desktop systems' needs for powerful CPUs and lots of RAM vary with the application. Generally speaking, modern GUI environments are RAM-hungry, so you should equip a modern desktop system with at least 256 MB of RAM, and probably 512 MB or even 1 GB if possible. Linux does support slimmer environments that can work well in 128 MB or less if necessary, though. Most desktop applications don't really need powerful 64-bit CPUs, but some programs are written inefficiently enough that a fast CPU is desirable. Also, certain applications are CPU-intensive.
- Peripheral hardware
- One of Linux's weakest hardware points as a desktop system is its degree of support for peripheral hardware that's common on desktop systems but less common on servers, such as scanners, digital cameras, video input cards, external hard drives, and so on. Drivers for all major classes of hardware exist, but many specific devices are unsupported. If you're buying or building a new system, including such peripherals, you can easily work around this problem by doing a bit of research and buying only compatible devices. If you want to convert existing systems to Linux, though, existing incompatible hardware can drive up the conversion cost.
- Linux distributions
- The distributions outlined earlier, in "Picking a distribution for server use," can all function as desktop distributions. Others, such as Mandrake and Xandros, are geared more toward desktop use.
- Configuration and administration
- Configuring and administering a desktop Linux system is much like handling a server system, but certain details do differ, mostly related to the specific software used to support each role. You might not even install an SMTP mail server on a desktop system, for instance; instead, you might install the OpenOffice.org office suite. The kernel, the basic startup procedures, and so on are likely to be similar for both types of system.
The terms desktop and workstation have similar meanings in the computer world; both refer to systems that are used by end users to accomplish real-world work. Typically, workstation refers to slightly more powerful computers, to those used for scientific or engineering functions as opposed to office productivity, to systems running Unix or Unix-like OSs as opposed to Windows, or to those with better network connections. The exact usage differs from one author to another, though. I use the two words interchangeably, but I use desktop most frequently.
Traditionally, Linux hasn't been a major player in the workstation arena; however, it does have all the basic features needed to be used in this way. Over the past few years, Linux's user interface has been improving rapidly, in large part because of the K Desktop Environment (KDE; http://www.kde.org) and the GNU Network Object Model Environment (GNOME; http://www.gnome.org). These are two desktop environments for Linux that provide a GUI desktop metaphor familiar to users of Windows, Mac OS, OS/2, and other GUI-oriented OSs. These environments rest atop the X Window System (or X for short) that provides low-level GUI tools such as support for opening windows and displaying text. Finally, tools such as office suites (OpenOffice.org, KOffice, GNOME Office, and so on), GUI mail readers, and web browsers make Linux a productive desktop OS. All these tools, but particularly desktop environments and office suites, have advanced substantially over the past few years, and today Linux is roughly as easy to use as Windows, although Linux is less familiar to the average office worker.
Many people think of Linux as a way to save money over using a commercial OS. Although Linux can indeed help you save money in the long term, you shouldn't blindly believe that Linux will do so, particularly in the short term. Costs in the switch, such as staff time installing Linux on dozens or hundreds of computers, retraining, replacing hardware for which no Linux drivers exist, and converting existing documents to new file formats, can create a net short-term cost to switching to Linux. In the long term, Linux may save money in license fees and easier long-term administration, but sometimes Linux's limitations can put a drag on these advantages. You'll need to evaluate Linux with an eye to how you intend to use it on your network.
Appendix B describes in more detail some of the issues involved in using Linux on the desktop.
Comparing Linux and Windows Features
When deploying Linux, you must consider the overall feature sets of both Linux and its potential competitors. In an environment that's dominated by Windows, the most relevant comparison is often to Windows, so that comparison will be described in the rest of this chapter.
Linux shares many of its strengths with other Unix-like OSs, and particularly with other open source Unix-like OSs, such as FreeBSD. Linux is probably the most popular and fastest-growing of these OSs because of its dynamic community and large number of distributions. If you prefer to run, say, FreeBSD, you certainly may, and much of this book is applicable to such environments; however, this book does focus on Linux, and it doesn't always point out where FreeBSD or other Unix-like OSs fit into the picture.
Linux is a powerful operating system, but Microsoft's latest offerings (Windows 2003 and Windows XP) are also powerful. Important differences between the two OS families include the following:
- Linux itself is low-cost, and this fact can be a big plus; however, the cost of the software is likely to be a small factor in the overall cost of running a computer. The TCO of Linux versus Windows is a matter of some debate, but it's likely to be lower for Linux if experienced Linux or Unix administrators are already available to deal with the system.
- GUI orientation
- All versions of Windows are largely tied to their GUIs; administering a Windows box without its GUI is virtually impossible. This linkage can make picking up Windows administration a bit easier for those unfamiliar with text-mode configuration, but it imposes some overhead on the computer itself, and it restricts the ways in which the system can be administered. These limitations are particularly severe for servers, which may not need a flashy GUI to handle mail or deliver IP addresses, except insofar as the OS itself requires these features. Linux, by contrast, is not nearly so GUI-oriented. Many distributions do provide GUI tools, but bypassing those tools to deal with the underlying text-mode configuration files and tools is usually a simple matter, provided you know where those files and tools are located and how to handle them.
- Hardware requirements
- In part because of Windows' reliance on its GUI, it requires slightly more powerful hardware than does an equivalent Linux server. This factor isn't extremely dramatic, though; chances are you won't be able to replace a 3-GHz Pentium 4 Windows system with a 200-MHz Pentium Linux system and achieve similar performance. Linux also runs on an extremely broad range of hardware platforms—IA-32, AMD64, PowerPC, Sparc, and so on. On the other hand, in the IA-32 world, the vast majority of hardware comes with Windows drivers, whereas Linux driver support isn't quite as complete. Linux drivers are available for most, but not all, IA-32 hardware.
- Software choices
- Both Linux and Windows provide multiple choices for many server software categories, such as mail servers or FTP servers; however, those choices are different. The best choices depend on the server type and your specific needs. Much of this book focuses on servers that work very well for Linux and for which the Windows equivalents have problems of one sort or another—cost, reliability, flexibility, or something else.
- Windows client integration
- This issue is really one of server features. Many Windows server programs are designed around proprietary or semiproprietary Microsoft protocols, or provide extended features that can be accessed from Microsoft clients. For these functions, Linux servers must necessarily either play catch-up or use alternative protocols. For instance, the Samba server on Linux does not provide the full features of a Windows 2000 or 2003 Active Directory (AD) domain controller. Thus, if you want such features, you must run either the Windows server or find some other way to implement the features you want.
- File compatibility
- Because Linux doesn't run the popular Windows programs except under emulators, file format compatibility may be an issue. This can be a factor when you read your own existing files or exchange files with other sites (with clients, say). In the office field, OpenOffice.org provides very good, but not absolutely perfect, Microsoft Office document compatibility. Appendix B describe this issue in greater detail.
On the whole, Linux makes an excellent choice for many small, mid-sized, and even large servers that use open protocols. When the server uses proprietary protocols or Microsoft extensions, the situation may change. Linux can also be a good choice as a desktop OS, particularly if your organization isn't tied to proprietary Microsoft file formats.
Linux is a flexible OS that can be deployed in many places on an existing Windows network. Its most common use is as a server to supplement or replace Windows servers, but you can also run Linux as a workstation OS. When deploying Linux, you'll have to match the Linux software to the hardware by selecting an appropriate distribution for your CPU and for the role you intend Linux to play on the network. You then need to select the server programs or end user applications you wish to run. | 1 | 6 |
History of the graphical user interface
The history of the graphical user interface, understood as the use of graphic icons and a pointing device to control a computer, covers a five-decade span of incremental refinements, built on some constant core principles. Several vendors have created their own windowing systems based on independent code, but with basic elements in common that define the WIMP "window, icon, menu, pointing device" paradigm.
Early dynamic information devices such as radar displays, where input devices were used for direct control of computer-created data, set the basis for later improvements of graphical interfaces. Some early cathode-ray-tube (CRT) screens used a lightpen, rather than a mouse, as the pointing device.
Augmentation of Human Intellect (NLS)
In the 1960s, Doug Engelbart's Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.
Much of the early research was based on how young children learn. So, the design was based on the childlike primitives of hand-eye coordination, rather than use of command languages, user-defined macro procedures, or automated transformations of data as later used by adult professionals.
Engelbart's work directly led to the advances at Xerox PARC. Several people went from SRI to Xerox PARC in the early 1970s. In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). It was not a commercial product, but several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations.
The GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls and a number of other researchers. It used windows, icons, and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get (WYSIWYG) cut & paste editor. In 1975, Xerox engineers demonstrated a Graphical User Interface "including icons and the first use of pop-up menus".
In 1981 Xerox introduced a pioneering product, Star, incorporating many of PARC's innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple, Microsoft and Sun Microsystems.
Xerox Alto and Xerox Star
The Xerox Alto (and later Xerox Star) was an early personal computer developed at Xerox PARC in 1973. It was the first computer to use the desktop metaphor and mouse-driven graphical user interface (GUI).
It was not a commercial product, but several thousand units were built and were heavily used at PARC, other Xerox facilities, at least one government facility and at several universities for many years. The Alto greatly influenced the design of some personal computers in the following decades, notably the Apple Macintosh and the first Sun workstations.
Apple Lisa and Macintosh (and later, the Apple IIgs)
Beginning in 1979, started by Steve Jobs and led by Jef Raskin, the Apple Lisa and Macintosh teams at Apple Computer (which included former members of the Xerox PARC group) continued to develop such ideas. The Macintosh, released in 1984, was the first commercially successful product to use a multi-panel window GUI. A desktop metaphor was used, in which files looked like pieces of paper. File directories looked like file folders. There were a set of desk accessories like a calculator, notepad, and alarm clock that the user could place around the screen as desired; and the user could delete files and folders by dragging them to a trash-can icon on the screen.
There is still some controversy over the amount of influence that Xerox's PARC work, as opposed to previous academic research, had on the GUIs of the Apple Lisa and Macintosh, but it is clear that the influence was extensive, because the first versions of the Lisa GUI even lacked icons. These prototype GUIs were at least mouse-driven, but completely ignored the WIMP ("window, icon, menu, pointing device") concept. Screenshots of the first Apple Lisa prototype GUIs show the early designs. Note also that Apple engineers visited the PARC facilities (Apple secured the rights for the visit by compensating Xerox with a pre-IPO purchase of Apple stock), and a number of PARC employees subsequently moved to Apple to work on the Lisa and Macintosh GUIs. However, the Apple work extended PARC's considerably, adding manipulable icons and drag-and-drop manipulation of objects in the file system (see Macintosh Finder), for example. A list of the improvements made by Apple, beyond the PARC interface, can be read at Folklore.org. Jef Raskin warns that many of the reported facts in the history of the PARC and Macintosh development are inaccurate, distorted or even fabricated, due to historians' lack of use of direct primary sources.
In 1986 the Apple IIgs was launched, a very advanced model of the successful Apple II series, based on 16-bit technology (in fact, virtually two machines in one). It came with a new operating system, the Apple GS/OS, which featured a Finder-like GUI, very similar to that of the Macintosh series, able to exploit the advanced graphic abilities of its Video Graphics Chip (VGC).
Graphical Environment Manager (GEM)
Digital Research (DRI) created the Graphical Environment Manager (GEM) as an add-on program for personal computers. GEM was developed to work with existing CP/M and MS-DOS operating systems on business computers such as IBM-compatibles. It was developed from DRI software, known as GSX, designed by a former PARC employee. The similarity to the Macintosh desktop led to a copyright lawsuit from Apple Computer, and a settlement which involved some changes to GEM. This was to be the first of a series of 'look and feel' lawsuits related to GUI design in the 1980s.
GEM received widespread use in the consumer market from 1985, when it was made the default user interface built into the Atari TOS operating system of the Atari ST line of personal computers. It was also bundled by other computer manufacturers and distributors, such as Amstrad. Later, it was distributed with the best-selling Digital Research version of DOS for IBM PC compatibles, DR-DOS 6.0. The GEM desktop faded from the market with the withdrawal of the Atari ST line in 1992 and with the rising popularity of Microsoft Windows 3.0 on the PC front around the same time.
Tandy's DeskMate appeared in the early 1980s on its TRS-80 machines and was ported to its Tandy 1000 range in 1984. Like most PC GUIs of the time, it depended on a disk operating system such as TRS-DOS or MS-DOS. The application was popular at the time and included a number of programs like Draw, Text and Calendar, as well as attracting outside investment such as Lotus 1-2-3 for DeskMate.
Acorn BBC Master Compact
Acorn's 8-bit BBC Master Compact shipped with Acorn's first public GUI interface in 1986. Little commercial software, beyond that included on the Welcome disk, was ever made available for the system, despite the claim by Acorn at the time that "the major software houses have worked with Acorn to make over 100 titles available on compilation discs at launch". The most avid supporter of the Master Compact appeared to be Superior Software, who produced and specifically labelled their games as 'Master Compact' compatible.
Amiga Intuition and the Workbench
The Amiga computer was launched by Commodore in 1985 with a GUI called Workbench. Workbench was based on an internal engine developed mostly by RJ Mical, called Intuition, which drove all the input events. The first versions used a blue/orange/white/black default palette, which was selected for high contrast on televisions and composite monitors. Workbench presented directories as drawers to fit in with the "workbench" theme. Intuition was the widget and graphics library that made the GUI work. It was driven by user events through the mouse, keyboard, and other input devices.
Due to a mistake made by the Commodore sales department, the first floppies of AmigaOS (released with the Amiga 1000) named the whole OS "Workbench". Since then, users and CBM itself referred to "Workbench" as the nickname for the whole AmigaOS (including Amiga DOS, Extras, etc.). This common consent ended with the release of version 2.0 of AmigaOS, which re-introduced proper names to the installation floppies of AmigaDOS, Workbench, Extras, etc.
Starting with Workbench 1.0, AmigaOS treated the Workbench as a backdrop, borderless window sitting atop a blank screen. With the introduction of AmigaOS 2.0, however, the user was free to select whether the main Workbench window appeared as a normally layered window, complete with a border and scrollbars, through a menu item.
Amiga users were able to boot their computer into a command line interface (aka CLI/shell). This was a keyboard-based environment without the Workbench GUI. Later they could invoke it with the CLI/SHELL command "LoadWB" which loaded Workbench GUI.
One major difference from other OSs of the time (and for some time after) was the Amiga's fully multitasking operating system, a powerful built-in animation system using a hardware blitter and copper, and four channels of 26 kHz 8-bit sampled sound. This made the Amiga the first multimedia computer, years before other OSs caught up.
Like most GUIs of the day, Amiga's Intuition followed Xerox's, and sometimes Apple's, lead. But a CLI was included, which dramatically extended the functionality of the platform. However, the Amiga's CLI/Shell is not just a simple text-based interface like in MS-DOS, but another graphic process driven by Intuition, with the same gadgets included in Amiga's graphics.library. The CLI/Shell interface integrates itself with the Workbench, sharing privileges with the GUI.
The Amiga Workbench evolved over the 1990s, even after Commodore's 1994 bankruptcy.
Arthur / RISC OS
RISC OS is a series of graphical user interface-based computer operating systems (OSes) designed for ARM architecture systems. It takes its name from the RISC (Reduced Instruction Set Computing) architecture supported. The OS was originally developed by Acorn Computers for use with their 1987 range of Archimedes personal computers using the Acorn RISC Machine processors. It comprises a command-line interface and desktop environment with a windowing system.
Originally branded as Arthur 1.20, the subsequent Arthur 2 release was shipped under the name RISC OS 2.
From 1988 to 1998, the OS was bundled with nearly every ARM-based Acorn computer model, including the Archimedes range, RiscPC, NewsPad and A7000. A version of the OS (called NCOS) was used in Oracle's Network Computer and compatible systems. After the breakup of Acorn in 1998, development of the OS was forked and separately continued by several companies, including RISCOS Ltd, Pace Micro Technology and Castle Technology. Since 1998 it has been bundled with a number of ARM-based desktop computers such as the Iyonix and A9home. As of 2012, the OS remains forked and is independently developed by RISCOS Ltd and the RISC OS Open community.
Most recent stable versions run on the ARMv3/ARMv4 RiscPC (or under emulation via VirtualAcorn or RPCEmu), the ARMv5 Iyonix, Raspberry Pi and ARMv7 Cortex-A8 processors (such as that used in the BeagleBoard and Touch Book). In 2011, a port for the Cortex-A9 PandaBoard was announced.
The WIMP interface incorporates three mouse buttons (named Select, Menu and Adjust), context-sensitive menus, window order control (i.e. send to back) and dynamic window focus (a window can have input focus at any position on the stack). The Icon bar (Dock) holds icons which represent mounted disc drives, RAM discs, running applications, system utilities, and docked files, directories or inactive applications. These icons have context-sensitive menus and support drag-and-drop behaviour. They represent the running application as a whole, irrespective of whether it has open windows.
The GUI is centred around the concept of files. The Filer displays the contents of a disc. Applications are run from the Filer view and files can be dragged to the Filer view from applications to perform saves. Application directories are used to store applications. The OS differentiates them from normal directories through the use of a pling (exclamation mark, also called shriek) prefix. Double-clicking on such a directory launches the application rather than opening the directory. The application's executable files and resources are contained within the directory, but normally they remain hidden from the user. Because applications are self-contained, this allows drag-and-drop installation and removal.
The RISC OS Style Guide encourages a consistent look and feel across applications. It was introduced in RISC OS 3 and specifies application appearance and behaviour. Acorn's own main bundled applications were not updated to comply with the guide until the Select release in 2001.
The outline font manager provides spatial anti-aliasing of fonts, the OS being the first operating system to include such a feature, having included it since before January 1989. Since 1994, in RISC OS 3.5, it has been possible to use an outline anti-aliased font in the WindowManager for UI elements, rather than the bitmap system font from previous versions.
MS-DOS file managers and utility suites
Because most of the very early IBM PCs and compatibles lacked any common true graphical capability (they used the 80-column basic text mode compatible with the original MDA display adapter), a series of file managers arose, including Microsoft's DOS Shell, which featured typical GUI elements such as menus, push buttons, lists with scrollbars and a mouse pointer. The term text-based user interface was later coined for this kind of interface. Many MS-DOS text mode applications, like the default text editor for MS-DOS 5.0 (and related tools, like QBasic), also used the same philosophy. The IBM DOS Shell included with IBM DOS 5.0 (circa 1992) supported both text display modes and actual graphics display modes, making it both a TUI and a GUI, depending on the chosen mode.
Advanced file managers for MS-DOS were able to redefine character shapes with EGA and better display adapters, giving some basic low resolution icons and graphical interface elements, including an arrow (instead of a coloured cell block) for the mouse pointer. When the display adapter lacks the ability to change the character's shapes, they default to the CP437 character set found in the adapter's ROM. Some popular utility suites for MS-DOS, as Norton Utilities (pictured) and PC Tools used these techniques as well.
DESQview was a text mode multitasking program introduced in July 1985. Running on top of MS-DOS, it allowed users to run multiple DOS programs concurrently in windows. It was the first program to bring multitasking and windowing capabilities to a DOS environment in which existing DOS programs could be used. DESQview was not a true GUI but offered certain components of one, such as resizable, overlapping windows and mouse pointing.
Applications under MS-DOS with proprietary GUIs
Before the MS-Windows age, and with the lack of a true common GUI under MS-DOS, most graphical applications which worked with EGA, VGA and better graphic cards had proprietary built-in GUIs. One of the best known such graphical applications was Deluxe Paint, a popular painting software with a typical WIMP interface.
The original Adobe Acrobat Reader executable file for MS-DOS was able to run on both the standard Windows 3.x GUI and the standard DOS command prompt. When it was launched from the command prompt, on a machine with a VGA graphics card, it provided its own GUI.
Microsoft Windows (16-bit versions)
Windows 1.0, a GUI for the MS-DOS operating system, was released in 1985. The market's response was less than stellar. Windows 2.0 followed, but it wasn't until the 1990 launch of Windows 3.0, based on Common User Access, that its popularity truly exploded. The GUI has seen minor redesigns since, mainly the networking-enabled Windows 3.11 and its Win32s 32-bit patch. The 16-bit line of MS Windows was discontinued with the introduction of Windows 95 and the 32-bit Windows NT architecture in the 1990s. See the next section.
The main window of a given application can occupy the full screen in maximized state. Users must then switch between maximized applications using the Alt+Tab keyboard shortcut; there is no mouse-based alternative short of un-maximizing a window. When none of the running application windows are maximized, switching can be done by clicking on a partially visible window, as is the common way in other GUIs.
In 1988, Apple sued Microsoft for copyright infringement of the Lisa and Apple Macintosh GUIs. The court case lasted four years before almost all of Apple's claims were denied on a contractual technicality. Subsequent appeals by Apple were also denied. Microsoft and Apple apparently entered a final, private settlement of the matter in 1997.
GEOS was launched in 1986. Originally written for the 8-bit home computer Commodore 64, and shortly after for the Apple II series, it was later ported to IBM PC systems. It came with several application programs like a calendar and word processor, and a cut-down version served as the basis for America Online's DOS client. Compared to the competing Windows 3.0 GUI it could run reasonably well on simpler hardware, but its developer had a restrictive policy towards third-party developers that prevented it from becoming a serious competitor. Moreover, it targeted 8-bit machines just as the 16-bit computer age was dawning.
The X Window System
The standard windowing system in the Unix world is the X Window System (commonly X11 or X), first released in the mid-1980s. The W Window System (1983) was the precursor to X; X was developed at MIT as Project Athena. Its original purpose was to allow users of the newly emerging graphic terminals to access remote graphics workstations without regard to the workstation's operating system or the hardware. Due largely to the availability of the source code used to write X, it has become the standard layer for management of graphical and input/output devices and for the building of both local and remote graphical interfaces on virtually all Unix, Linux and other Unix-like operating systems, with the notable exceptions of Mac OS X and Android.
X allows a graphical terminal user to make use of remote resources on the network as if they were all located locally to the user by running a single module of software called the X server. The software running on the remote machine is called the client application. X's network transparency protocols allow the display and input portions of any application to be separated from the remainder of the application and 'served up' to any of a large number of remote users. X is available today as free software.
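In practice, this client/server split is visible in how X clients pick a display: by convention they read the DISPLAY environment variable, which names the host and display number of the X server that should render their output. A minimal sketch (no real X server is contacted here; the values shown are illustrative):

```shell
# An X server runs on the machine whose screen the user is sitting at.
# Any client, local or remote, renders there if DISPLAY points at it.

# Display 0 on the local machine (the usual default):
DISPLAY=:0

# The same display addressed over the network, as a remote client would
# (in modern practice SSH X forwarding is typically used instead of a
# raw TCP display connection):
DISPLAY=localhost:0

export DISPLAY
echo "Clients will render on $DISPLAY"
```

Any program started from this shell inherits DISPLAY and so "serves up" its windows to the named X server, which is what makes the display location independent of where the application actually runs.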
The PostScript-based NeWS (Network extensible Window System) was developed by Sun Microsystems in the mid-1980s. For several years SunOS included a window system combining NeWS and the X Window System. Although NeWS was considered technically elegant by some commentators, Sun eventually dropped the product. Unlike X, NeWS was always proprietary software.
The 1990s: Mainstream usage of the desktop
The widespread adoption of the PC platform at homes and small business popularized computers among people with no formal training. This created a fast-growing market, opening an opportunity for commercial exploitation and of easy-to-use interfaces and making economically viable the incremental refinement of the existing GUIs for home systems.
Also, the spreading of Highcolor and True Color capabilities of display adapters providing thousands and millions of colors, along with faster CPUs and accelerated graphic cards, cheaper RAM, storage devices up to an order of magnitude larger (from megabytes to gigabytes) and larger bandwidth for telecom networking at lower cost helped to create an environment in which the common user was able to run complicated GUIs which began to favor aesthetics.
Windows 95 and "a computer in every home"
After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system. Windows 95 was intended to integrate Microsoft's formerly separate MS-DOS and Windows products and included an enhanced version of DOS, often referred to as MS-DOS 7.0. It also featured a significant redesign of the GUI, dubbed "Cairo". While Cairo never really materialized, parts of Cairo found their way into subsequent versions of the operating system starting with Windows 95. Both Windows 95 and Windows NT could run 32-bit applications and could exploit the abilities of the Intel 80386 CPU, such as preemptive multitasking and up to 4 GiB of linear address space. Windows 95 was touted as a 32-bit operating system, but it was actually based on a hybrid kernel (VWIN32.VXD) with the 16-bit user interface (USER.EXE) and graphics device interface (GDI.EXE) of Windows for Workgroups 3.11, which had 16-bit kernel components with a 32-bit subsystem (USER32.DLL and GDI32.DLL) that allowed it to run native 16-bit applications as well as 32-bit applications. In the marketplace, Windows 95 was an unqualified success, promoting a general upgrade to 32-bit technology, and within a year or two of its release it had become the most successful operating system ever produced.
Windows 95 saw the beginning of the browser wars, when the World Wide Web began receiving a great deal of attention in the popular culture and mass media. Microsoft at first did not see potential in the Web, and Windows 95 was shipped with Microsoft's own online service, The Microsoft Network, which was dial-up only and was used primarily for its own content, not Internet access. As versions of Netscape Navigator and Internet Explorer were released at a rapid pace over the following few years, Microsoft used its desktop dominance to push its browser and shape the ecology of the web mainly as a monoculture.
Windows 95 evolved through the years into Windows 98 and Windows ME. Windows ME was the last in the line of the Windows 3.x-based operating systems from Microsoft. Windows underwent a parallel 32-bit evolutionary path, in which Windows NT 3.1 was released in 1993. Windows NT (for New Technology) was a native 32-bit operating system with a new driver model, was Unicode-based, and provided for true separation between applications. Windows NT also supported 16-bit applications in an NTVDM, but it did not support VxD-based drivers. Windows 95 was supposed to be released before 1993 as the predecessor to Windows NT. The idea was to promote the development of 32-bit applications with backward compatibility, leading the way for a more successful NT release. After multiple delays, Windows 95 was released without Unicode and used the VxD driver model. Windows NT 3.1 evolved into Windows NT 3.5, 3.51 and then 4.0, which finally shared a similar interface with its Windows 9x desktop counterpart and included a Start button. The evolution continued with Windows 2000, Windows XP, Windows Vista, then Windows 7. Windows XP and higher were also made available in 64-bit editions. Windows server products branched off with the introduction of Windows Server 2003 (available in 32-bit and 64-bit IA64 or x64 editions), then Windows Server 2008 and then Windows Server 2008 R2. Windows 2000 and XP shared the same basic GUI, although XP introduced Visual Styles. With Windows 98, the Active Desktop theme was introduced, allowing an HTML approach for the desktop, but this feature was coldly received by customers, who frequently disabled it. Windows Vista finally discontinued Active Desktop, introducing a new Sidebar on the desktop instead.
The Macintosh's GUI has been infrequently revised since 1984, with major updates including System 7 and Mac OS 8. It underwent its largest revision with the introduction of the "Aqua" interface in 2001's Mac OS X. It was a new operating system built primarily on technology from NeXTStep, with UI elements of the original Mac OS grafted on. Mac OS X uses a technology known as Quartz for graphics rendering and drawing on-screen. Some interface features of Mac OS X are inherited from NeXTStep (such as the Dock, the automatic wait cursor, or double-buffered windows giving a solid appearance and flicker-free window redraws), while others are inherited from the old Mac OS operating system (the single system-wide menu bar). Mac OS X v10.3 introduced features to improve usability, including Exposé, which is designed to make finding open windows easier.
With Mac OS X v10.4, new features were added, including Dashboard (a virtual alternate desktop for mini specific-purpose applications) and a search tool called Spotlight, which provides users with an option for searching through files instead of browsing through folders.
GUIs built on the X Window System
In the early days of X Window development, Sun Microsystems and AT&T attempted to push for a GUI standard called OPEN LOOK in competition with Motif. OPEN LOOK was a well-designed standard developed from scratch in conjunction with Xerox, while Motif was a collective effort that fell into place, with a look and feel patterned after Windows 3.11. Many who worked on OPEN LOOK at the time appreciated its design coherence. Motif prevailed in the UNIX GUI battles and became the basis for the Common Desktop Environment (CDE). CDE was based on VUE (Visual User Environment), a proprietary desktop from Hewlett-Packard that in turn was based on the Motif look and feel.
In the late 1990s, there was significant growth in the Unix world, especially among the free software community. New graphical desktop movements grew up around Linux and similar operating systems, based on the X Window System. A new emphasis on providing an integrated and uniform interface to the user brought about new desktop environments, such as KDE Plasma Desktop, GNOME and XFCE which are supplanting CDE in popularity on both Unix and Unix-like operating systems. The XFCE, KDE and GNOME look and feel each tend to undergo more rapid change and less codification than the earlier OPEN LOOK and Motif environments.
Amiga Workbench
Later releases added improvements over the original Workbench, like support for high-color Workbench screens, context menus, and embossed 2D icons with pseudo-3D aspect. Some Amiga users preferred alternative interfaces to standard Workbench, such as Directory Opus Magellan.
The use of improved, third-party GUI engines became common amongst users who preferred more attractive interfaces – such as Magic User Interface (MUI), and ReAction. These object-oriented graphic engines driven by user interface classes and methods were then standardized into the Amiga environment and changed Amiga Workbench to a complete and modern guided interface, with new standard gadgets, animated buttons, true 24-bit-color icons, increased use of wallpapers for screens and windows, alpha channel, transparencies and shadows as any modern GUI provides.
Modern derivatives of Workbench are Ambient for MorphOS, Scalos, Workbench for AmigaOS 4 and Wanderer for AROS. There is a brief article on Ambient, with descriptions of MUI icons, menus and gadgets, at aps.fr, and images of Zune at the main AROS site.
Use of object oriented graphic engines dramatically changes the look and feel of a GUI to match actual styleguides.
OS/2
Originally collaboratively developed by Microsoft and IBM to replace DOS, OS/2 version 1.0 (released in 1987) had no GUI at all. Version 1.1 (released 1988) included Presentation Manager (PM), which looked a lot like the later Windows 3.0 UI. After the split with Microsoft, IBM developed the Workplace Shell (WPS) for version 2.0 (released in 1992), a quite radical, object-oriented approach to GUIs. Microsoft later imitated some of this look in Windows 95.
NeXTSTEP
The NeXTSTEP user interface was used in the NeXT line of computers. NeXTSTEP's first major version was released in 1989. It used Display PostScript for its graphical underpinning. The NeXTSTEP interface's most significant feature was the Dock, carried with some modification into Mac OS X, and had other minor interface details that some found made it easier and more intuitive to use than previous GUIs. NeXTSTEP's GUI was the first to feature opaque dragging of windows in its user interface, on a comparatively weak machine by today's standards, ideally aided by high performance graphics hardware.
BeOS
BeOS was developed on custom AT&T Hobbit-based computers before switching to PowerPC hardware by a team led by former Apple executive Jean-Louis Gassée as an alternative to Mac OS. BeOS was later ported to Intel hardware. It used an object-oriented kernel written by Be, and did not use the X Window System, but a different GUI written from scratch. Much effort was spent by the developers to make it an efficient platform for multimedia applications. Be Inc. was acquired by PalmSource, Inc. (Palm Inc. at the time) in 2001. The BeOS GUI still lives in Haiku, an open source software reimplementation of the BeOS.
3D user interface
As of 2009, a trend in desktop technology is the inclusion of 3D effects in window management. It is based on experimental research in user interface design that tries to expand the expressive power of the existing toolkits in order to enhance the physical cues that allow for direct manipulation. New effects common to several projects are scale resizing and zooming, several window transformations and animations (wobbly windows, smooth minimization to system tray...), composition of images (used for window drop shadows and transparency) and enhancing the global organization of open windows (zooming to virtual desktops, desktop cube, Exposé, etc.) The proof-of-concept BumpTop desktop combines a physical representation of documents with tools for document classification possible only in the simulated environment, like instant reordering and automated grouping of related documents.
These effects are popularized thanks to the widespread use of 3D video cards (mainly due to gaming) which allow for complex visual processing with low CPU use, using the 3D acceleration in most modern graphics cards to render the application clients in a 3D scene. The application window is drawn off-screen in a pixel buffer, and the graphics card renders it into the 3D scene.
This can have the advantage of moving some of the window rendering to the GPU on the graphics card, thus reducing the load on the main CPU, but the required facilities must be available on the graphics card to take advantage of this.
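The core idea of the compositing step described above — each window is drawn into an off-screen pixel buffer, then blended into the final scene — can be sketched in a few lines. This is an illustrative toy only (real compositors do this on the GPU via OpenGL/Direct3D); the function names and the tiny "scene" here are my own inventions, and the blend used is standard "over" alpha compositing: out = src·a + dst·(1 − a).

```python
# Toy sketch of window compositing: a window buffer is blended into a
# scene buffer using "over" alpha compositing. Pixels are (r, g, b)
# tuples in 0..255; all names are illustrative, not a real API.

def composite_over(src_pixel, dst_pixel, alpha):
    """Blend one source pixel over a destination pixel."""
    return tuple(
        round(s * alpha + d * (1.0 - alpha))
        for s, d in zip(src_pixel, dst_pixel)
    )

def composite_window(scene, window, x, y, alpha=1.0):
    """Blend an off-screen window buffer into the scene at (x, y)."""
    for row, line in enumerate(window):
        for col, pixel in enumerate(line):
            scene[y + row][x + col] = composite_over(
                pixel, scene[y + row][x + col], alpha
            )

# A 4x4 dark-grey "desktop" and a 2x2 white window at 50% transparency.
scene = [[(32, 32, 32)] * 4 for _ in range(4)]
window = [[(255, 255, 255)] * 2 for _ in range(2)]
composite_window(scene, window, 1, 1, alpha=0.5)
print(scene[1][1])  # -> (144, 144, 144)
```

A hardware compositor performs the same arithmetic per pixel, but in parallel on the GPU, which is why these effects became cheap once 3D cards were ubiquitous.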
Examples of 3D user-interface software include XGL and Compiz from Novell, and AIGLX bundled with Red Hat Fedora. Quartz Extreme for Mac OS X and the Aero interface of Windows 7 and Vista use 3D rendering for shading and transparency effects, as well as Exposé and Windows Flip and Flip 3D, respectively. Windows Vista uses Direct3D to accomplish this, whereas the other interfaces use OpenGL.
At the IEEE 7th Symposium on 3D User Interfaces, Ph.D. student Mengu Sukan, M.S. student Semih Energin and Prof. Steve Feiner won best poster for research and development of augmented reality, titled "Manipulating Virtual Objects in Hand-Held Augmented Reality using Stored Snapshots." The poster presents a set of interaction techniques that allow a user to first take snapshots of a scene using a tablet computer, and then jump back and forth between the snapshots, to revisit them virtually for interaction. By storing for each snapshot a still image of the scene, along with the camera position and orientation determined by computer vision software, this approach allows the overlaid 3D graphics to be dynamic and interactive. This makes it possible for the user to move and rotate virtual 3D objects from the vantage points of different locations, without the overhead of physically traveling between those locations. 3DUI attendees tried a real-time demo in which they laid out virtual furniture. They could rapidly transition between the live view and the viewpoints of multiple snapshots, as they moved and rotated items of virtual furniture, iteratively designing a desired layout.
Portable devices such as MP3 players and cell phones have been a burgeoning area of deployment for GUIs in recent years. Since the mid-2000s, a vast majority of portable devices have advanced to having high-screen resolutions and sizes. (The iPhone 5's 1,136 × 640 pixel display is an example). Because of this, these devices have their own famed user interfaces and operating systems that have large homebrew communities dedicated to creating their own visual elements, such as icons, menus, wallpapers, and more. Post-WIMP interfaces are often used in these mobile devices, where the traditional pointing devices required by the desktop metaphor are not practical.
As high-powered graphics hardware draws considerable power and generates significant heat, many of the 3D effects developed between 2000 and 2010 are not practical on this class of device. This has led to the development of simpler interfaces that make a design feature of two-dimensionality, as exhibited by Metro and the 2012 Gmail redesign.
- "The computer mouse turns 40". Retrieved 2012-06-12.
- Clive Akass. "The men who really invented the GUI".
- History of PARC
- Mike Tuck. "The Real History of the GUI".
- "On Xerox, Apple and Progress" (1996), Folklore.org.
- Jef Raskin. "Holes in Histories".
- Chris's Acorns: Master Compact
- [Acorn User October 1986 - News - Page 9]
- "About us: RISC OS Open Limited FAQ". RISC OS Open. Retrieved 2011-06-13.
- "Acorn announces distribution deal with Castle Technology for RISC based products". Press release (Acorn Computers Ltd). 1998-10-12. Archived from the original on 6 May 1999. Retrieved 2011-01-06. "(October 12th 1998), Cambridge, UK-Acorn announced today that it has completed negotiations with Castle Technology for them to distribute Acorn products."
- "Risc os 6 general faq". RISCOS Ltd. Retrieved 2011-01-31. "[RISC OS 6 is] suitable for Risc PC, A7000 and Virtual Acorn products."
- "RISC OS 5 features". Iyonix Ltd. Retrieved 2011-01-31. "All IYONIX pcs ship with RISC OS 5 in flash ROM."
- Lee, Jeffrey. "Newsround". The Icon Bar. Retrieved 17 October 2011.
- Holwerda, Thom (31 October 2011). "Raspberry Pi To Embrace RISC OS". OSNews. Retrieved 1 November 2011.
- Dewhurst, Christopher (December 2011). "The London show 2011". Archive (magazine) 23 (3). p. 3.
- Farrell, Nick (2009-04-27). "Snaps leak of RISC OS5 on Beagleboard". The Inquirer. Retrieved 2011-06-28. "A snap of an RISC OS 5, running on a Beagleboard device powered by a 600MHz ARM Cortex-A8 processor with a built-in graphics chip, has tipped up on the world wide wibble. The port developed by Jeffrey Lee is a breakthrough for the shared-source project because it has ported the OS without an army of engineers."
- "Cortex-A8 port status". RISC OS Open. Retrieved 2011-01-31. "[The port includes] a modified version of the RISC OS kernel containing support for (all) Cortex-A8 CPU cores."
- Lee, Jeffrey (2011-08-02). "Have I Got Old News For You". The Icon Bar. Retrieved 28 September 2011. "[...] Willi Theiss has recently announced that he's been working on a port of RISC OS to the PandaBoard [...]"
- Mellor, Phil (2007-03-23). "An arbitrary number of possibly influential RISC OS things". The Icon Bar. Retrieved 27 September 2011. "Admittedly it wasn't until RISC OS Select was released, almost 10 years later, that the standard Acorn applications (Draw, Edit, and Paint) implemented the style guide's clipboard recommendations, but most products followed it with care."
- Round, Mark (2004-02-26). "Emulating RISC OS under Windows". OSnews. OSNews. Retrieved 2011-05-12. "Many of the UI concepts that we take for granted were first pioneered in RISC OS, for instance: scalable anti-aliased fonts and an operating system extendable by 'modules', while most of the PC world was still on Windows 3.0."
- Ghiraddje (2009-12-22). "The RISC OS GUI". Telcontar.net. Retrieved 2011-05-12. "Only with Mac OS X did any mainstream graphical interface provide the smoothly rendered, fractionally spaced type that Acorn accomplished in 1992 or earlier."
- Reimer, Jeremy (2005-05). "A History of the GUI". ArsTechnica. Retrieved 2011-05-25. "[...] in 1987, the UK-based company Acorn Computers introduced their [...] GUI, called "Arthur", also was the first to feature anti-aliased display of on-screen fonts, even in 16-color mode!"
- Holwerda, Thom (2005-06-23). "Screen Fonts: Shape Accuracy or On-Screen Readability?". OSNews. Retrieved 2011-06-13. "[...] it was RISC OS that had the first system-wide, intricate [...] font rendering in operating systems."
- Pountain, Dick (1988-12). "Screentest: Archie RISC OS". Personal Computer World. p. 154. Retrieved 2011-01-14. "[ArcDraw] can also add text in multiple sizes and fonts to a drawing (including anti-aliased fonts)"
- Acorn Computers Support Group Application Notice 253 - New features of RISC OS version 3.5
- "how-windows-came-to-be-windows-1". sbp-romania.com. Retrieved October 3, 2011.
- "history-computer.com". http://history-computer.com. Retrieved October 3, 2011.
- Dedual, Nicolas (2012-03-08). "Sukan, Feiner, and Energin receive Best Poster Award at IEEE 3DUI 2012" (announcement). Columbia University. Retrieved 3 April 2013.
- Jeremy Reimer. "A History of the GUI" Ars Technica. May 5, 2005.
- "User Interface Timeline" George Mason University
- Nathan Lineback. "The Graphical User Interface Gallery". Nathan's Toasty Technology Page.
- Oral history interview with Marvin L. Minsky, Charles Babbage Institute, University of Minnesota. Minsky describes artificial intelligence (AI) research at the Massachusetts Institute of Technology (MIT), including research in the areas of graphics, word processing, and time-sharing.
- Oral history interview with Ivan Sutherland, Charles Babbage Institute, University of Minnesota. Sutherland describes his tenure as head of ARPA's Information Processing Techniques Office (IPTO) from 1963 to 1965, including new projects in graphics and networking.
- Oral history interview with Charles A. Csuri, Charles Babbage Institute, University of Minnesota. Csuri recounts his art education and explains his transition to computer graphics in the mid-1960s, after receiving a National Science Foundation grant for research in graphics.
- GUIdebook: Graphical User Interface gallery
- VisiOn history – The first GUI for the PC
- mprove: Historical Overview of Graphical User Interfaces
- Anecdotes about the development of the Macintosh Hardware & GUI
Glossary for C
C.E.
A term of dating that means the Common Era. Used by scientists, scholars, and Pagans as a more appropriate alternative to the religious A.D., Latin for “Year of our Lord.”
Cabala...is a system of mysticism with its origins in Judaism, stemming in part from the "chariot" visions of first-century mystics, in part from Gnosticism and Neoplatonism, in part from the theological speculations of medieval Spanish Jews, and in part from later thinkers. For many centuries, cabala was the accepted form of mysticism and theology within Judaism, but for the most part it has now fallen out of favor in religious contexts. Nevertheless, many rabbis and Jewish scholars still take an interest in it. As a philosophy and as a way of looking at God and the universe, it survives in yet wider quarters. Especially in the form developed by Christian enthusiasts in the Italian Renaissance and by 19th-century Christian and pagan occultists, cabalism retains vast importance as the key to mystical thinking outside of the mainstream and to the practice of ceremonial magic.
Also See: Kabbalah, Qabalah, Qabala
Cabinet
A large enclosure with a curtain in the front. Usually, they were big enough to hold a chair and a table. During a séance, a medium would be bound and secured within the cabinet, often tied to a chair or post. Objects such as tambourines and bells would be placed on the table. With the curtain closed, spirits would supposedly “come through” the medium in some way to play the instruments, move things around, cause lights to flash, etc. Some magic entertainers have been able to simulate this as part of their show.
Perhaps the most famous use of a cabinet was a variation made by Houdini in an attempt to catch the medium known as “Margery” in fraud. He put her in a box with holes for her head and arms in order to control her. Houdini claimed he proved her to be a fake, but his assistant later admitted to planting false evidence on her by Houdini’s instructions.
Cabochon
A fashioned talisman that usually has a convex shape on one side and which is flat on the other, although any fashioning that does not involve faceting or tumbling is often considered to be making a cabochon. They are often of oval shape, but irregular shapes are common.
Cacodemon
A type of evil spirit, cacodemons were capable of shapeshifting. In Enochian magick there are 1,024 cacodemons who do the work that creates the universe. They are the negative counterpart of agathodemons.
Cadent Houses
Cadent houses in the birth chart (3rd, 6th, 9th, and 12th) are said to be less forceful; planets placed in these houses may operate on a more mental or detached level.
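The house grouping above is a fixed classification, so it reduces to a simple lookup. A minimal sketch, with the standard companion groupings (angular and succedent) added for completeness; the function name is my own:

```python
# Birth-chart house classifications as small lookup sets.
CADENT = {3, 6, 9, 12}
ANGULAR = {1, 4, 7, 10}
SUCCEDENT = {2, 5, 8, 11}

def house_mode(house):
    """Classify a birth-chart house (1-12) as angular, succedent, or cadent."""
    if house in ANGULAR:
        return "angular"
    if house in SUCCEDENT:
        return "succedent"
    if house in CADENT:
        return "cadent"
    raise ValueError("house must be 1-12")

print(house_mode(9))  # -> cadent
```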
Caduceus
(A winged wand entwined by two serpents.) Symbol of Mercury, messenger of the gods, the wand represents power, the snakes represent wisdom, and the wings represent diligence. Also, the wand represents earth, the wings represent air, and the serpents represent fire and water.
Cairn
A human-made pile of rocks. They may be very simple or quite elaborate and often are made in the form of a cone. In modern times they are used to mark such things as a path or the top of a mountain. In ancient times they may have marked burial sites, astrological sites, or served as aids for hunting.
Cakes and Wine
A small “feast” marking the end to many Wiccan and Pagan rituals. The food of the feast consists of “cake” (usually a form of bread) and wine or a wine substitute. Sometimes, instead of wine, some groups will use ale.
Call
A term for a summoning, sometimes used to indicate the summoning of a spirit as part of an invocation or evocation.
When testing for ESP using such things as guessing which playing card an investigator is holding, each response to a question such as “What card am I holding?” is known as a “call.”
Calling the Quarters
Usually performed near the start of a ritual, the practice of evoking protective entities, deities, or powers that correspond to the specific energies of the cardinal directions as determined by the beliefs of the tradition being followed.
Callisto
Pre-Hellenic goddess who was the personification of the force of instinct. In Greece, she became associated with being an eternally virginal nymph of Artemis. When Artemis discovers that she was seduced and impregnated by Zeus, she turns Callisto into a bear and puts her in the stars as the constellation, Ursa Major.
Calypso Moon Language
I am rather at a loss to give definition to this. Culling calls it a quasi-Enochian language. Researching, I find that Calypso is a West Indian musical style influenced by jazz; it’s also a small species of orchid (Calypso borealis), having a flower variegated with purple, pink, and yellow, that grows in cold, bog-like localities in the northern part of the United States. It is also the name of a tiny moon of the planet Saturn, discovered in 1980 and in 1983 named for the goddess Calypso who detained Odysseus for seven years in Homer’s Odyssey. And, finally, it is a fashion in which a woman ties a knot in her shirt and exposes her waist. There is some indication that it closely resembles modern Greek.
CAM
Acronym for Complementary and Alternative Medicine. This is the currently popular term to describe all non-allopathic forms of healing, ranging from the laying on of hands and acupuncture to Reiki and hypnotherapy.
Cancellarius
An officer of the Hermetic Order of the Golden Dawn. In a full temple, this officer is present on the dais when the temple is open for Neophytes. The purpose of this officer (the name is Latin for “chancellor”) is to keep records. When this office is held by a woman, she is called the Cancellaria.
Cancer
The best quality of Cancer is the ability to nurture the self and others. The worst quality is holding on to things too tightly, or smothering. A key phrase is "I feel.” The Cancer personality is family- and home-oriented. The emotional well-being of the home environment is key to Cancer's emotional balance, which is even-tempered when it flows. When opposed, the Cancerian temperament can display other water characteristics, like freezing or flood stage emotions.
Never let the emotional side of Cancer fool you into thinking you are not leaders – this sign provides leadership in the feeling arena and can be influential in all areas where subjective feelings are important.
The Cancerian mind will often ask "how does this feel?" before deciding what action to take. You are true to your belief system and may be difficult to steer into any activity you cannot support on the "gut" level. This attention to inner feelings puts you in a good position in industries that appeal to the mass market, as you don't lose sight of individual preferences in the pursuit of the big picture.
Cancer is a water sign. Water takes the shape of the vessel that contains it, and water runs downhill. The Cancer temperament will go with the flow when that is convenient, and can be quite happy if the vocational, relationship, or recreational container suits the individual. Sometimes you surprise people, though, when you resist going a certain direction "on principle." Stick to those principles, as they set you apart from people who either don't seem to know what they really believe, or are not able to hold their ground in the face of opposition.
Generally you respond to others in a caring or nurturing context. You may consider what will help move a process along, or you decide what people should wear or eat, based on your needs, not theirs. At the same time you can be a skillful manager, helping others to map out a clear, well-defined process for their activities. You are good at keeping projects on track – you know how to find the strongest current in the river, and then keep your craft headed into that current.
Key Phrase: I FEEL
Cancer is expressed through feeling, and can be purely emotional. It embodies the qualities of protection and tenderness, and shows a high degree of sensitivity. As the Cardinal Water sign of the zodiac, Cancer initiates emotions, both in the self and others. Like its symbol, the Crab, Cancer can be indirect and defensive, and may even show a tendency toward manipulation and passive-aggression. The most patriotic of the signs, Cancer will defend home and country, and has a soft spot for mothers. Sensitive, sympathetic, and intuitive, this is the sign associated with mothering and lunar energy. When carried to an extreme, it can be smothering, moody, and suspicious. Where Cancer is found in your chart, there is receptivity and a need to be emotionally connected.
Candle Magic
“The use of candles in magic dates back many centuries, but the specific system of magic in which colored, anointed candles are the primary tools is a relatively recent innovation, developed in the nineteenth century out of Catholic devotional practices using candles of various kinds. The southern United States, with its rich heritage of hoodoo magic and African tradition, seems to be the homeland of candle magic, with New Orleans probably the original place of invention.
“The basic practices of candle magic involve a detailed color symbolism in which red candles represent sexual desire, green stand for money, white for spirituality and healing, black for cursing and banishing, and so forth. Candles used in magic are “dressed” or anointed with specific oils, which are typically rubbed onto the candle from the middle out to both ends. A candle magic working may simply involve lighting one or more candles and reciting a charm while it burns; it may also involve rearranging candles on an altar to represent the rearrangement in the world that the working is intended to bring about.
“Originally, candle magic was mostly practiced among Southern folk magicians of various kinds, but in recent decades it has spread far more widely. Many Witches and Pagans in the current Pagan revival movements make use of it, as do a great many occultists who simply picked up a book on the subject and found it to their liking.”
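The color symbolism quoted above amounts to a correspondence table, which can be captured as a simple lookup. Only the colors actually named in the text are included; anything else would be a guess, so unknown colors fall through to a neutral answer:

```python
# Candle-color correspondences, limited to those stated in the entry above.
CANDLE_COLORS = {
    "red": "sexual desire",
    "green": "money",
    "white": "spirituality and healing",
    "black": "cursing and banishing",
}

def candle_meaning(color):
    """Return the traditional meaning for a candle color, if listed."""
    return CANDLE_COLORS.get(color.lower(), "not listed in this entry")

print(candle_meaning("Green"))  # -> money
```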
A preparatory OBE exercise helpful in developing concentration skills required to induce and guide the experience.
Candlemas
A Pagan festival traditionally celebrated on the second of February, also known as Imbolc, Lupercus, etc. Dedicated to the Goddess of Fertility and the Horned God, it was anciently known as a feast for the god Pan. Often seen as a celebration of change and of removing of the old to make way for the new.
Candomblé
Candomblé is an Afro-Brazilian syncretic religion that, like Santería and Vodou, has its roots in the ancient religion of Ifá and is also influenced by European spiritualist practices and indigenous folk wisdom along with Christianity. The worship and service to the Orixás (deities) and to the Egungun (ancestors) are the core beliefs, along with the practice of rituals to enhance every aspect of life and divination to communicate with the Orixás.
Cantrip
A Scottish term meaning a magick spell, especially as used by Witches. It may also be a minor spell and possibly mischievous. It has been used in novels and role-playing games (RPGs) with varying definitions.
Capnomancy
Divination through the interpretation of smoke.
Divination by smoke, also known as Libanomancy and Thurifumia, involves observation of smoke as it rises from a fire. While you can add your own complexities, generally if the smoke rises straight up, the answer is positive; if it instead hangs heavily over the fire, the answer is negative. Another method is to watch the smoke from extinguished candles. If the smoke moves to the right, the answer is positive; if to the left, it is negative.
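The smoke-reading rules above form a small decision table, which a one-function sketch can make explicit. The labels for the four movements ("straight", "hanging", "right", "left") are my own shorthand for the behaviors the entry describes:

```python
# Capnomancy rules from the entry above, as a lookup.
def read_smoke(movement):
    """Interpret smoke movement: 'straight', 'hanging', 'right', or 'left'."""
    positive = {"straight", "right"}   # rises straight up, or drifts right
    negative = {"hanging", "left"}     # hangs heavily, or drifts left
    if movement in positive:
        return "positive"
    if movement in negative:
        return "negative"
    return "unclear"

print(read_smoke("straight"))  # -> positive
print(read_smoke("left"))      # -> negative
```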
Capricorn
The best quality of Capricorn is diplomacy. The worst quality is deceitfulness. A key phrase is “I utilize.” To understand Capricorn, one must understand that there is not much difference between diplomacy and deceit. Capricorns tend to be honest and conscientious in your dealings with other people, but you may learn through experience to not say everything you know. You have the capacity to take the practical path to a material goal and leave the precise truth to someone else. That said, Capricorns are responsible, self-disciplined individuals who can be very patient in the pursuit of your goals, and you generally act on a well-defined sense of moral right and wrong. You recognize and accept duty as a part of life.
Thoughtful and methodical, Capricorns are the masters of synthesis. You are methodical and organized in your thinking. Persistence is a quality that you cultivate. You find power in self-control and mental concentration.
By temperament Capricorns are cautious. You are subtle about how you gather the information you need, and you are subtle about how you apply your efforts to any task. You make good managers because of your excellent sense of organization, but you can brood or be overly exacting in what you expect. You are able to adapt situations to your own needs. You tend to be somewhat conventional in dress and demeanor.
Capricorns are able to take advantage of circumstances. You are mentally prepared to take action when the time is right, and you are efficient in your actions. You can appear unsympathetic to the needs of others, yet you faithfully fulfill what you see as your duty. While you sometimes seem rigid or selfish in your behavior, you are capable of self-sacrifice and are not unjust in your actions. Going back to the key phrase, “I utilize,” it is helpful to remember that Capricorns make skillful use of the people and situations around you, and you are generally not concerned about the popularity of your actions.
Key Phrase: I USE
The most ambitious sign of the zodiac, Capricorn is focused, cautious, and sensible. As the Cardinal Earth sign of the zodiac, Capricorn knows how to make the best use of the material, physical plane. The patience and discipline seen in Capricorn can help to provide a stable foundation for growth. But Capricorn can also resist change and become controlling, inhibited, and rigid. Symbolized by the Goat, Capricorn prefers the sure-footed path toward ascension, and will stand firm for a long time rather than back down. When hurt, Capricorn can be melancholy; when uncertain, driven by fears. Sometimes thought of as miserly, Capricorn rather prefers to be responsible and frugal, making the best use of all resources. Through its connection with the energy of Saturn, Capricorn desires to be the quintessential authoritarian. Where you see Capricorn in your chart, you find a need for structure and conscientious effort.
Captain of Your Own Ship
The personal "commandment" that each person should take full authority and responsibility of his or her being. Without question, it involves more than the modern perceptions of democratic government and capitalism for it also makes demands upon both to provide each individual with certain basics of education, the rule of law, security of person, health, public services, etc.
Caput Draconis
In geomantic divination, a figure of two dots above three vertically aligned single dots. The phrase itself is Latin for “head of the dragon.” In a divination it is positive for beginnings. It also enhances things, such as being favorable for favorable figures, and unfavorable for unfavorable ones.
Cauda Draconis
In geomantic divination, a figure of one dot above two sets of two dots above one dot. The phrase itself is Latin for “tail of the dragon.” In a divination it is unlucky for most things, except when having to do with security.
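Geomantic figures are just four rows of one or two dots, so they encode naturally as tuples of dot-counts read top to bottom. This sketch follows the dot patterns exactly as the two entries above describe them; the encoding and function name are my own:

```python
# The two geomantic figures above, as rows of dot-counts (top to bottom),
# following the descriptions in the entries as given.
FIGURES = {
    "Caput Draconis": (2, 1, 1, 1),
    "Cauda Draconis": (1, 2, 2, 1),
}

def render(name):
    """Return a figure as centred rows of dots, one row per line."""
    return "\n".join("* *" if n == 2 else " * " for n in FIGURES[name])

print(render("Caput Draconis"))
```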
Card Guessing
A basic form of ESP testing. It uses a deck of cards (such as regular playing cards or Zener cards) and repeated rounds of testing. This technique can be used to test for precognition, by having the subject guess a card before it is revealed; telepathy, by having the subject guess a card being looked at by the tester; etc.
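A card-guessing test is only meaningful against the chance baseline: with the five Zener symbols, pure guessing yields a hit rate of 1/5 (20%), and researchers look for sustained deviation from that. A minimal simulation of the null case, with names of my own choosing:

```python
import random

# Simulate random guessing over a Zener deck's five symbols; the hit
# rate should converge to 1/5 as the number of trials grows.
SYMBOLS = ["circle", "cross", "waves", "square", "star"]

def run_trials(n, rng=random):
    """Run n guess-the-card trials with random guesses; return hit rate."""
    hits = 0
    for _ in range(n):
        target = rng.choice(SYMBOLS)   # the card the tester holds
        guess = rng.choice(SYMBOLS)    # the subject's "call"
        hits += (guess == target)
    return hits / n

rate = run_trials(100_000)
print(f"hit rate over 100000 trials: {rate:.3f}")  # close to 0.200
```

A real subject's score is compared statistically against this 20% expectation; only consistent, significant deviation would be counted as evidence of anything beyond chance.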
Cardinal Signs
Assertive, ambitious, impatient. The cardinal signs are Aries, Cancer, Libra, and Capricorn.
Cardinal Points
Also known as the “cardinal directions,” they are the North, South, East, and West. Many magickal rituals are directed toward one of the cardinal points, and some rituals have work done at each point. In mathematics, points dividing the cardinal points, such as the northeast, southwest, etc., are known as “ordinal points,” but in magickal traditions they are frequently called “cross-quarter points.”
Carnelian
Carnelian is a translucent orange-red stone. It is a cryptocrystalline quartz, composed of silica. It is found in India and South America.
In ancient times carnelian was thought to still the blood and soften anger. It is a gem of the Earth, a symbol of the strength and beauty of our planet. It is good for people who are absent minded, confused, or unfocused. It strengthens the voice and helps one become more eloquent and charitable. Carnelian carries the stories and records of our Earth and can be used to see into the past. This stone symbolizes good luck and contentment.
This gem does good things to the body just by wearing it, as it feeds energy molecules directly through the skin, just as one can breathe in prana by inhaling air. Carnelian is one of the few gems that harmonizes effectively with the elements of fire and earth today. It helps cleanse the liver if you hold the stone over the liver and massage the area.
Carromancy
A type of divination, also known as ceromancy, that uses melted wax. Traditionally, the wax would be melted in a cup or bowl, and then poured into a container of cold water, causing it to solidify. The shapes it takes, along with the movements in the water, would then be interpreted.
Cartomancy
Divination with cards, such as the Tarot.
Cartopedy
Divination by reading patterns on the feet; similar to palmistry.
Carya
Also known as Caryatis. In ancient Greece, a walnut tree goddess.
Case, Paul Foster
Founder of the Builders of the Adytum (BOTA), an esoteric teaching center in Los Angeles also known for mail order teachings. He was a former member of the Hermetic Order of the Golden Dawn, and the author of The Tarot, A Key to the Wisdom of the Ages, and a book of Tarot card meditations, The Book of Tokens. Both books were recommended by author Louis T. Culling.
Lord of Saturn and the sign of Capricorn. He is also Ruler of the Seventh Heaven. He helps people understand patience and encourages them to overcome longstanding obstacles and problems. He provides serenity and teaches temperance. Cassiel is associated with karma, and helps people understand the law of cause and effect. Because of his association with Saturn, Cassiel works slowly. As it takes Saturn almost thirty years to orbit the Sun, Cassiel can take years to resolve a problem. Fortunately, Raphael is willing to talk with Cassiel to speed the process up.
According to ancient Greek mythology, Castalia was a nymph whom Apollo turned into a fountain at Delphi, home of the famous Oracle. Those who drank her waters would receive artistic (especially poetic) inspiration.
A form of divination where small objects such as stones, twigs, runes, I Ching stalks, coins, etc. are tossed onto the ground or onto some flat surface and the results are interpreted.
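One of the casting methods mentioned here, the I Ching coin toss, is mechanical enough to sketch in code. Below is an illustrative Python sketch of the traditional three-coin method (each coin counts 3 for heads or 2 for tails, so the totals 6, 7, 8, and 9 name the four line types); the function names are my own, not part of any tradition.

```python
import random

# Three-coin method for casting one I Ching line: the three coins total
# 6 (old yin), 7 (young yang), 8 (young yin), or 9 (old yang).
LINE_TYPES = {6: "old yin", 7: "young yang", 8: "young yin", 9: "old yang"}

def toss_line(rng=random):
    """Toss three coins (tails = 2, heads = 3) and return the total, 6-9."""
    return sum(rng.choice((2, 3)) for _ in range(3))

def cast_hexagram(rng=random):
    """Cast six lines, traditionally built from the bottom line upward."""
    return [toss_line(rng) for _ in range(6)]

hexagram = cast_hexagram()
reading = [LINE_TYPES[total] for total in hexagram]
```

The interpretation of the resulting hexagram, of course, remains the diviner's work; the code only reproduces the casting step.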
Casting the Circle
The practice of using a ritual to spiritual build a magick circle (as opposed to the physical movement of objects into the area) for use in magickal rituals and religious rites. Some say it is the activating of the spiritual aspects of a Wiccan or Pagan Temple.
The technical name is chatoyancy. This is caused by the dispersion of light due to tiny parallel “needles” within a crystal.
A state wherein the body, or parts of the body, shows rigidity and is not affected by external stimuli. Often used by hypnotists to determine the level of hypnotic trance. A common example is to have a hypnotized person hold their arm out straight. They will keep it extended and resist having it pushed down until given a suggestion to move it.
Scientific theory that some of the earth’s landforms were shaped by major catastrophes, such as very large floods, in the distant past.
Divination by gazing in a mirror (such as a black scrying mirror).
Divination with mirrors. There are two basic forms: One uses mirrors made from special metals associated with planetary lore, and the other special gazing mirrors used somewhat like crystal balls.
To consult a reflective mirror, focus on the question or matter of concern, asking specific questions if possible, and then let yourself slide into a light trance to find the answers.
A form of divination that uses vessels of metals, especially brass because of its supposed anti-demonic qualities. A fluid would be poured into the vessel and the diviner, perhaps in a trance, looks into the fluid and has prophetic visions.
In geomantic divination, a figure of three vertically aligned dots above two dots. The word itself is Latin for “acquiring.” In a divination it means positive endings and can help with losses. It is generally considered a favorable sign.
The cauldron is a large metal vessel, usually made of iron. It is seen to be symbolic of the Goddess. Fires may be lit within it, or the cauldron may be filled with water and flowers. Despite popular misconceptions, brews are rarely created in the cauldron.
An ancient Tantric concept adopted by Theosophy and brought into contemporary occult thought. There are supposedly multiple bodies or sheaths that make up a person (anciently there were 5, but the Theosophists made this 7). The causal body is the highest, most ethereal body. It is a veil for the true soul.
This principle states that equal benefit comes from action towards a goal and inaction away from it. The causes of our success are to be found both in efforts to move forward and in the absence of efforts to move backwards.
Like the other five planes of the three lower levels, there is “substance” as well as “laws” relating to its specific nature. The Causal shares the third level with the Mental Plane—which likewise has its unique substance, but the two share the characteristic of five dimensions (in contrast to the familiar three of the Physical and Etheric Planes of the first and lowest level).
Divination by throwing an object in the fire and determining “yes” or “no” based on whether or not the object burned.
Divination by Fire, also known as Pyromancy. Fire is the most potent of the natural elements because it causes change, and is directly linked with the Sun, and hence with the earth and growing plants. Yet, fire can also lead to disasters. Fire Gazing involves a light and spontaneous trance during which images and symbols may be seen in the fire. Fire Reading, in contrast, is more direct. After the fire has been lit, stir it, and:
An all male occult society in Italy. Linked to the Luperci.
The Sleep-Learning program, also called Celestial Learning, is based on the premise that bodies of knowledge presently unavailable to conscious awareness exist in the astral realm and can be acquired through spontaneous out-of-body travel during sleep. The program recognizes individual differences in rates of learning and the possibility of accelerating learning through out-of-body interactions during sleep with advanced learning specialists.
Celestial Learning emphasizes two critical conditions related to out-of-body learning during sleep:
The study found that students actively engaged in a variety of learning situations experienced a rapid improvement in the rate of learning as well as retention through Celestial Learning. The program, rather than directly inducing out-of-body travel, simply established the pre-sleep conditions conducive to spontaneous travel.
Located on the surface of the cell, they receive messages or signals from the environment or field and communicate that information to the cell, which, in turn, triggers a corresponding response within the cell.
(pronounced Kelt) - The ethnic group ancestral to the Irish, Scottish, Welsh, Cornish (of Cornwall), Breton, and Manx, and a high percentage of the French, Belgian and Swiss people. Celtic (pronounced either Kel-tik or Sel-tik) and Celtophile are derivatives of this word.
Of or relating to the Celtic people and languages.
Also See: Celt
A ten-card Tarot spread developed in the 1890s by a member or associate of the Golden Dawn. Taught to first-level initiates of the Golden Dawn and widely used after publication by Arthur E. Waite in his Pictorial Key to the Tarot.
Card 1. The main question on your mind
Card 2. A challenge within the situation
Card 3. The foundation of the question, subconscious issues
Card 4. The recent past
Card 5. Conscious goals, what you think you can achieve
Card 6. The near future
Card 7. What you are bringing to the situation
Card 8. How others affect the situation
Card 9. Your hopes and fears
Card 10. The outcome
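As a study aid, the ten positions of the spread can be paired with a shuffled deck programmatically. This is a minimal Python sketch; the placeholder deck and the function name are illustrative inventions, not part of any tarot tradition.

```python
import random

# The ten positions of the Celtic Cross spread, in dealing order.
POSITIONS = [
    "The main question on your mind",
    "A challenge within the situation",
    "The foundation of the question, subconscious issues",
    "The recent past",
    "Conscious goals, what you think you can achieve",
    "The near future",
    "What you are bringing to the situation",
    "How others affect the situation",
    "Your hopes and fears",
    "The outcome",
]

def deal_celtic_cross(deck, rng=random):
    """Shuffle a copy of the deck and pair its first ten cards with the positions."""
    cards = list(deck)
    rng.shuffle(cards)
    return list(zip(POSITIONS, cards[:10]))

# Placeholder deck; a full tarot deck has 78 cards.
deck = [f"card {n}" for n in range(78)]
reading = deal_celtic_cross(deck)
```

Keeping the positions in a single ordered list makes the dealing order explicit, which is the part of the spread beginners most often confuse.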
A cross (sometimes with equal arms, sometimes where the horizontal bar is shorter and above the center of the vertical bar) with a circle around the intersection of the horizontal and vertical bars.
A divination deck that combines Celtic astrology with the Ogham (or Celtic alphabet) and tree symbolism alongside Celtic festivals.
Those Witches who either try to remake ancient Celtic Religion or claim they are a continuation of that ancient religion. Some Celtic Witchcraft traditions include Gardnerian, Alexandrian, Welsh, and Algard.
To spread the odor of incense in an area by carrying or waving the incense itself, by using a fan to disburse the smoke, or by swinging a censer that holds the smoking incense. Also, the passing of a person or object through incense smoke, usually to purify that person or object.
An incense burner. Traditionally, a censer is a metal container, filled with incense, that is swung on the end of a chain.
Censorship in Religion
All organized religions experience a program of codification in which certain practices and literature are approved by a controlling body while other practices, books, myth and lore are disapproved and sometimes officially banned. Initially, much of this had political motivation—to bring the "faithful" under a single roof and to eliminate deviancy and competition to Roman rule in the case of early Christianity.
Until the middle of the 20th century, certain books were listed as "sinful" for Catholics to read, and such banning was not limited to books with sexual themes although that was the common interpretation.
Only in recent years has there been widespread opportunity to explore alternatives in Christianity, and—to a lesser extent—in Islam and Judaism. In some cases, it has been archaeological finds that brought old ideas into new perspectives. For Christians, the primary findings have related to Gnosticism and Neoplatonism.
A term meaning to bring your focus and attention back to the center of yourself. This is useful to eliminate disturbing issues that prevent you from focusing on the purpose of what you are doing: your work, your meditation, or your magick. One method of doing this is by grounding.
In European magical traditions of the nineteenth and twentieth centuries, a method of using the eyes to prevent another magician from gaining control of one’s mind and will. To use the central gaze, the magician focuses his or her eyes intently on a point between the attacker’s eyebrows, at the location of the “third eye” center. The crucial point lies in not allowing oneself to meet the attacker’s gaze directly, even for a moment; concentration must be maintained on the chosen point.
Divination by means of a donkey’s head.
A form of divining the name of a person guilty of a crime. The head of an ass is broiled. When the guilty person’s name is called out, the ass’ jaws move.
In Greek mythology, this daughter of Nyx (the night) and sister to the fates was the goddess of violent death.
Name for a malevolent ghost.
Divination by observing weather phenomena.
A condition where the supply of oxygen to the brain is reduced. It has numerous causes, ranging from choking or smoke inhalation to shock or trauma. In mild cases, symptoms include difficulty in learning tasks and reduced short term memory. If it continues it can cause sensory distortions and hallucinations. This has been pointed to as a cause of some near-death experiences. However, it cannot explain the similarity of some of these experiences among people of all ages and cultures. Complete lack of oxygen going to the brain, known as cerebral anoxia, will cause irreversible brain damage within four to six minutes.
Ceremonial Magick is one of the most complicated systems of spiritual attainment in the world. It is a mixture of Jewish, Christian, and ancient Egyptian philosophy mixed with ancient Indian and Chaldean ideas spiced with a hint of earlier Paganism. This is mixed with the ceremonial aspects of Catholicism and Masonry. It usually heavily involves the study of the Kabbalah, the mysticism of the world put into Jewish and Judeo-Christian terms.
The object of ceremonial magick is to stimulate the senses, to power-up the emotions, and to firmly conceptualize the purpose of the operation—which is to create a transcending experience to unite Personality with the Divine Self. To this end, rituals, symbols, clothing, colors, incenses, sound, dramatic invocations and sacraments are selected in accordance with established "correspondences" of one thing to another to transport the magician towards a mystical reality.
Although Ceres is just one of thousands of known asteroids, in astrology it is often treated as a minor planet. The asteroid is named after the Roman harvest goddess and is a version of the Greek Demeter. Keywords used in astrological interpretation of Ceres include: abandonment, agriculture, anger, compassion, domesticating, ecological, fertility, foster parenting, governess, menopause, mother, nanny, nurturing, animal husbandry, and survival.
A Welsh mother goddess.
The Greek name for ancient Celtic god Uindos, son of Noudons, who is featured in a group of great epic tales and romances called the Fenian cycle. His most famous incarnation is as Finn Mac Cumhail.
Also See: Finn Mac Cumhail, Uindos
Divination by analyzing patterns in melted wax dripped on a flat surface.
Divination with melted wax. Slowly pour the wax into cold water and look for images and symbols as the wax hardens in the water.
An expression composed of two Sanskrit words: chakra, meaning circle, and puja, meaning ritual. Thus, a Chakra Puja is a ritual practiced by a group (“circle”) of people. In the West, there are Chakra Pujas that are open to the public, and more private ones intended for people initiated into a particular Tantric tradition.
A chakra is a spinning vortex of energy created within ourselves by the interpenetration of consciousness and the physical body. Through this combination, chakras become centers of activity for the reception, assimilation, and transmission of life energies. Uniting the chakras is what we experience as the "self."
The word chakra comes from the Sanskrit word for "wheel" or "disk" and originated within the philosophy of the ancient yoga systems of India.
Also See: Chakra
Wheel of light or spinning disc. Energetic organs located at various plexuses throughout the body that work to revitalize the physical body and the energy field.
Also See: Chakra
Pronounced “Chak-rah” with a hard “ch” as in the word “chalk” or “Kahk-rah,” these are power centers in the aura related to organs or glands in the body. The chakras are not in the body per se; they are actually whirls, vorteces, circles, or lotuses that psychics can see in the aura.
Also See: Chakra
Sanskrit. "Wheel, Vortex, or Whirlpool." Psychic centers located in the aura functioning through the etheric body that exchange particular energies between the physical body and the personality, and the higher sources of energy associated with the planets, the Solar System, and the cosmos. They are interfaces between Mind and Body.
There are seven traditional "master" chakras and dozens of minor ones located in such places as the palms of the hands, soles of the feet, joints of arms and legs, and just about any place traditionally adorned with jewelry.
Chakras are "vortexes," whirling centers of energy associated with particular areas of the body. In the Hindu tradition, muladhara is located at the base of the spine and is the source of kundalini and the power used in sex magic. Svadhisthna is located at the sacrum. Muladhara and Svadhisthna are linked to the physical body. Manipura is located at the solar plexus. Muladhara, Svadhisthna, and Manipura are together associated as the Personality, and their energies can be projected through the solar plexus in such psychic phenomena as rapping, ectoplasm, and the creation of familiars. Manipura is linked to the lower astral body. Anahata is located at the heart and is associated with group consciousness. Vishuddha is located at the throat and is associated with clairvoyance.
Anahata and Ajna are linked to the higher astral body. Ajna is located at the brow and is associated with clairvoyance. Sahasrara is located at the crown and is associated with spiritual consciousness. Anahata, Vishuddha, and Sahasrara are together associated as the spiritual self.
The master, or major, chakras are as follows. While we are listing some correspondences to planets, colors, and the Kabbalistic sephiroth, there is considerable debate about these and the correlations cannot be specific because the chakras and the sephiroth involve two different systems. Likewise, although not listed, there are differences between both these systems and those of Oriental martial arts and healing systems.
The following chart is a simplification of the primary chakra system:
1 These are the most commonly assigned colors, but authorities differ.
2 These are the most commonly assigned locations, but authorities differ. Instead of the Solar Plexus, Theosophists identify it with the Spleen, others with the navel.
3 Commonly, this is given as two, but it is really two "wings" of 48 each.
4 Most commonly, it is identified as a thousand-petaled lotus. The crown chakra has 960 spokes plus another 12 in its center, which is gleaming white with gold at its core.
5 Between anus and perineum.
6 Again, there are disagreements among authorities. Remember that there is no direct physical connection between the etheric chakras and the physical body.
Wheels of Life: A User's Guide to the Chakra System, by Anodea Judith
Also See: Chakra
Disk or wheel. Chakras are energetic centers in the body that can be used as a kind of inner roadmap in Tantric practice. Dr. Mumford defines a chakra as, "a whirling vortex of energy, the meeting point between the body and the mind."
Also See: Chakra
A person living in the area of ancient Babylon (modern Iraq) associated with the Sumerian city of Ur, which was eventually ruled by the Chaldees (the biblical “Ur of the Chaldees”). Chaldea was associated with magic, and by the seventeenth century in Europe, any person who was an astrologer, diviner, or magician was generically known as a “Chaldean.”
A small book of ideas and philosophical concepts supposedly revealing the Chaldean mysteries. The authorship is attributed to Zoroaster, but this is questionable.
A large goblet, frequently stemmed, used to hold wine in religious rituals. Also used to represent the element of Water in magickal rites.
Chalice of Power
(or Camael, Camiel, Kemuel) (“He Who Sees God”) The head of the Choir of Dominions and is one of the seven great archangels. He can be called upon for any matters involving tolerance, understanding, forgiveness, and love. Chamuel is also one of the ten Kabbalistic archangels. He rights wrongs, soothes troubled minds, and provides justice. Chamuel is Ruler of Mars. You should call on Chamuel whenever you need additional strength, or are in conflict with someone else. Chamuel provides courage, persistence, and determination.
The occurrence and development of events without any obvious design; random, unpredictable influences on events that cannot be anticipated by normal means.
Sanskrit name of the Moon.
Literally the “point of the moon;” however, in the Tantric “twilight language,” it means “moon juice.” The moon is symbolic of the spiritualized woman. The moon juice is the lubricating fluid created by a woman in preparation for spiritual sexual intercourse with her partner (maithuna).
In Tibetan spirituality, a ritual bell.
There is always change. In Self-Empowerment, instead of letting life happen to you, you can make life happen your way. But you have “to take charge” and “state your intentions.” There is recent scientific evidence showing that changes in your own thoughts make changes in your brain’s circuitry.
In Celtic and Irish lore, a fairy given to a family in exchange for a human child.
A person who allows a spirit to speak through them. This involves the (usually) willing participation of the channeler who temporarily allows a spirit to “possess” their body. A spirit medium may or may not be a channeler. Also known as a “trance medium” (and incorrectly a “transmedium”).
Also See: trance medium, transmedium
A technique of communicating with a non-physical entity. The channeler or trance medium allows a non-physical entity (usually a spirit) to inhabit or take over their body so they can more easily share information, guidance, etc.
Receiving information from a discarnate entity or a higher spiritual being. It may also refer to communication with an aspect of one’s own subconscious mind. It is similar to, but not necessarily the same as, the spirit communication of mediumship. In both, however, one person serves as a bridge between a spirit or spiritual intelligence and people of ordinary consciousness. In spirit communication, the medium is more often unaware of the communication; in channeling of spiritual intelligence, the channeler is more often aware and sometimes a participant.
Automatic Writing is a form of Channeling in which a person, sometimes in trance, writes or even keyboards messages generally believed to originate with spiritual beings, or with aspects of the subconscious mind.
Suggested Reading –
The process of repeating sounds, words, or phrases. This can be done melodically or simply rhythmically. The goal is to induce an altered state of consciousness that may be used for magickal purposes.
The repetition of words or short phrases in a vibrating voice that stirs psychic energy and may induce trance.
Suggested Reading: Andrews, Ted: Sacred Sounds: Magic & Healing Through Words & Music
The act of vocalizing and usually repeating a chant.
(Pronounced “kay-oh-ist”) Term for a practitioner of Chaos magick. Such a person could also be referred to as a “Chaote.”
Divination by things seen in the air. Also known as Aeromancy.
The disorganized, primal, state from which creation emerges. Chaos Theory is a field of mathematics with applications in various sciences, including physics and magick, concerning the level at which small changes can produce major, generally unpredictable, results—the so-called “Butterfly Effect.” In Quantum Theory and in Magickal Theory, intention applied at sub-atomic levels can create desired change.
A term coined by Peter Carroll in 1978 in his book Liber Null (published in the US in combination with another work as Liber Null and Psychonaut) to describe a system of magick that its practitioners (who usually call themselves Chaoists or Chaotes) consider to be radically different from previous forms of magick. Important concepts in this tradition include the power of belief (expressed as “fake it until you make it”), the Gnostic state (being extremely focused) as necessary for most magick, and extreme eclecticism including the use of any belief system (even ones that are known to be entirely fictional) for the working of magickal rituals. Carroll, with Ray Sherwin, founded the Magical Pact of the Illuminates of Thanateros (IOT), which is an important force in the very loosely organized Chaos Magic movement.
(Pronounced “kay-oat”) A term that describes a person who practices Chaos magick.
Divination to determine a person’s character.
The process of infusing some object with magickal power for a specific purpose. For example, a ring, pendant, or belt can be charged to protect the person who wears them.
Charge of the Goddess
An inspirational and instructional recitation and instruction repeated by the High Priestess manifesting the Goddess in some Pagan rituals. It was derived from sources both ancient and modern, especially from the works of Aleister Crowley and the book Aradia, or the Gospel of the Witches by Charles Godfrey Leland. The earliest version was compiled by Gerald Gardner, arguably the founder of modern Wicca, but the more popular versions, both in prose and verse, were created from Gardner’s work by Doreen Valiente. Several variations have appeared over the years.
To infuse an object with magical power.
A large cup (or small bowl) that contains sanctified (or exorcised) water. Usually located on the main Altar. It is carried around an area to asperge the Circle and coven members. Also called the water bowl.
One of the trumps of the Major Arcana of the tarot. Numbered VII. In the system of Eliphas Levi, it corresponds to the Hebrew letter Zayin. In the system of the Golden Dawn, The Chariot corresponds to the Hebrew letter Cheth and the astrological sign of Cancer.
Qabalistic Description. The 7th Key, the 18th Path. Child of the Powers of the Waters, Lord of the Triumph of Light. This Path connects Binah (Understanding) to Geburah (Severity), the descent of Spirit into the lower manifest universe. The Chariot, having conquered the lower planes, is the first path to cross the Abyss from the lower Sephiroth. Cheth means a fence or enclosure, which is the Chariot itself enabling its driver, the Higher Self, to rise above limitation through the balance of opposites.
Card Description. Here we have a symbol of the spirit of man controlling the lower principles, soul and body, and thus passing triumphantly through the astral plane, rising above the clouds of illusion and penetrating to the higher spheres.
The colors amber, silver-grey, blue-grey, and the deep blue violet of the night sky elucidate this symbol. It is the sublimation of the Psyche.
Interpretation. Triumph, victory, health. Success, although sometimes not stable and enduring.
Psychological Value. This Arcanum may be used to master episodes of manic delusions, and uncontrollable fantasies. It helps you find a balanced, centered path, so that you may discover a personal expression for your spirituality. Thanks to this Arcanum, you can work on every aspect of your unconscious, and bring to light those elements that have been buried deep inside. It fosters gentleness and benevolence. This Arcanum is useful for assisting the Magus who wants to make spells, and it can also be used to resolve fantasies that get out of control. (According to the Divine Tarot of Aurum Solis - Divinity: Poseidon; Greek Letter: Psi.)
The young charioteer is in command of his physical and emotional drives, symbolized by the two opposing forces that pull the Chariot.
(1874–1932) Writer and novelist who achieved fame for collecting reports of paranormal phenomena in works such as The Book of the Damned.
(1850–1935) Famed French physiologist who won the Nobel Prize in 1913 for his work on understanding acute allergic reactions known as anaphylaxis. Richet also studied Spiritualism, investigated mediums, and became the president of the UK's famed Society for Psychical Research (in 1905). He believed that there was a physical explanation for mediumistic phenomena, rather than adhering to the idea that the phenomena were caused by spirits. He believed that some of the phenomena were caused by the medium's "sixth sense" (the ability to perceive non-physical "vibrations"), not spirits. Other phenomena, he believed, were caused by the medium projecting a material substance from their bodies. He called this substance "ectoplasm," a term that is still widely used.
An object imbued with some type of energy for magical purposes. See “talisman.”
Term that comes from the same root as chant. A magick spell, particularly one that makes you more attractive to another person as in, “She charmed me.” Also, an object that has been magickally charged. Another term for a talisman or amulet.
Divination by interpreting writing found on papers.
In ancient Greek mythology, Charybdis was a beautiful naiad, the daughter of Poseidon and Gaia. She sided with her father against Zeus, who turned her into a horrible sea monster. Later she was seen not as a monster per se, but as the goddess of terrifying oceanic whirlpools.
Kabbalist (Turkish 16th–17th Century). A leading disciple of Isaac Luria, Vital experienced many fantastic visions and personal revelations. Elijah and other righteous men of the past appeared to him. He performed healings, exorcisms, dowsing for water, and at one point declared himself the Messiah. He also believed he had undergone multiple incarnations and, for example, had the soul of R[abbi] Akiba.
Check and Verify
A vital procedure in all astral training because astral consciousness includes the faculties of imagination and dreaming and untrained they lead to fantasy and even delusion. Variations of the phrase include “test and verify” and “trust, but verify.”
A Sanskrit term generally accepted to mean "student." It originally meant "servant," indicating the idea that a student is meant to serve his or her teacher in order to pay for (exchange energy for) the teacher's spiritual training.
The Greek Hera was sometimes seen as a triple goddess. Chera is her third aspect, the old wise woman.
Singular form of “cherubim.” From the Hebrew “kerub.” A generic term for a celestial being, often shown in art as being a winged, chubby, young child.
The second-highest rank of angels in Dionysius’ hierarchy. They are God’s record keepers and reflect his wisdom and divine intelligence. They pay careful attention to all the details.
The fourth Sephirah of the cabalistic Tree of Life, the middle Sephirah on the Pillar of Mercy. The term is a Hebrew word meaning "Mercy." It represents the archetype of the number 4, the merciful aspect of the Godhead. It corresponds to the divine name El, the archangel Tzadqiel, the angelic choir called Chashmalim, and Tzedek, or heavenly Sphere of Jupiter.
In Chokmah is the Radix of blue and thence is there a blue colour pure and primitive, and glistening with a spiritual Light which is reflected unto Chesed. And the Sphere of its Operation is called Tzedek or justice and it fashioneth the images of material things, bestowing peace and mercy; and it ruleth the sphere of the action of the planet Jupiter. And Al is the title of a God strong and mighty, ruling in Glory, Magnificence and Grace. And the Archangel of Chesed is Tzadkiel, the prince of Mercy and Beneficence, and the Name of the Order of Angels is Chashmalim Brilliant Ones, who are also called the Order of Dominions or Dominations. The Sephira Chesed is also called Gedulah or Magnificence and Glory.
The eighth letter of the Hebrew alphabet, Ch or H. Represents the number 8. The fourth of the twelve "single letters." A Hebrew word meaning "fence" or "enclosure." Corresponds to Cancer, the 18th Path (between Binah and Geburah), and Tarot trump VII The Chariot.
A Sanskrit term meaning a "copy" or "shadow," it is the astral image or astral body of a person.
This energy consists of static electricity, infrasound, infrared radiation, and magnetic fields. Chi is a complex form of energy that manifests itself in your vitality, your spirit, and your life.
Vital life force often referred to as energy. A term commonly used in acupuncture referring to the energetic flow within the meridians.
Energy or life force. Also known as Ki or Prana.
The Chinese name for vital energy, comparable to the Hindu Prana.
The Qabalistic Part of the Soul that represents the creative impulse and divine will. The Chiah is attributed to Chokmah.
Part of the soul located in Chokmah. It is Divine Will, the source of action.
Also known as Oriental or Asian Medicine, Chinese Therapies is a term used to describe a wide assortment of practices for sound body, mind, and spirit. These include herbal remedies, acupuncture, acupressure and massage, energywork such as Chi Gung, and martial arts.
The lines of the hand are what give palmists the most material for interpretation and analysis. The interpretation of the palm lines is called chiromancy.
Chiron is an asteroid traveling in orbit between Saturn and Uranus. Although research on its effect on natal charts is not yet complete, it is believed to represent a key or doorway, healing, ecology, and a bridge between traditional and modern methods.
In your birth chart, Chiron represents your deeper sense of purpose, and adds a subtle yet powerful drive to achieve a connection to higher values. Chiron may also indicate areas of vulnerability or emotional wounding which need special attention. Although authorities have not yet fully determined Chiron’s influence, it is well-established enough that many astrologers include it along with the Sun, Moon, and planets in their basic natal chart analyses.
A Sanskrit word referring to an energy path that is tinier than the more famous paths known as nadis. Chitrinis are said to actually be inside of the nadis.
Literally a “young green shoot,” it is a title or name of the goddess Demeter.
In Tibetan spirituality Chöd is the name for a rite of magical self-sacrifice.
The second Sephirah of the cabalistic Tree of Life, the topmost Sephirah on the Pillar of Mercy. The term is a Hebrew word meaning "Wisdom." It represents the archetype of the number 2, the masculine aspect of the Godhead. It corresponds to the divine name Yah, the archangel Raziel, the angelic choir called Ophanim (Wheels) and the Mazloth, or heavenly Sphere of the Zodiac.
In Chokmah is a cloud-like grey which containeth various colours and is mixed with them, like a transparent pearl-hued mist, yet radiating withal, as if behind it there was a brilliant glory. And the Sphere of its influence is in Masloth, the Starry Heaven, wherein it disposeth the forms of things. And Yah is the Divine Ideal Wisdom, and its Archangel is Ratziel, the Prince or Princes of the knowledge of hidden and concealed things, and the name of its Order of Angels is Auphanim, the Wheels or the Whirling Forces which are also called the Order of Kerubim.
Choronzon's number is 9, also the number of Man. Choronzon appears as a "demon" within the Enochian writings of Dr. John Dee, and likewise within Crowley's system, where he is "the Dweller in the Abyss," believed to be the obstacle between the adept and enlightenment. Choronzon is also the name of the demon that guards the Abyss on the Tree of Life, separating the lower from the higher. It is that Abyss that we must cross to fulfill our spiritual destiny. If met with proper preparation by the magician, his function is to destroy the ego, allowing the magician to cross the Abyss. We all must confront our demon.
The demon is also our Shadow, the lower self of the subconsciousness with our fears and repressions.
Also known as the C∴C∴, it appears to have been a magical group active in Chicago as early as 1931 and at least as recently as 1979. Exactly what it was, or is, is confusing and probably of no pertinence to our study here. According to the occult scholar P. R. Koenig, in 1933 a small group of homosexual men split off from C. F. Russell's original group in order to practice Crowley’s XI°. It was led in recent history by Michael P. Bertiaux teaching Haitian Voodoo and O∴T∴O∴ magic.
Unfortunately, the study of Western magical philosophy is often obscured by the number of secret orders cast on Masonic models that claim to teach true magic. At least in some instances these are successful business operations and in some other cases provide opportunities to indulge the vanities of members who adore dressing in expensive robes and addressing each other by their secret names. Most of their magical teachings of value were derived from the serious work of the Hermetic Order of the Golden Dawn and the Aurum Solis. These teachings were long ago made available in book form. Experience demonstrates that the study and practice of magic is as suitable to the solitary person as to group membership.
A religious tradition based on the book Science & Health with Key to Scriptures by Mary Baker Eddy (derived from the works of Phineas P. Quimby) and the Bible. Although it has a full theology, most people are familiar with it through its belief that illness is the result of fear, ignorance, or sin, and that if you eliminate those factors through prayer, the illness will cease. Followers of Christian Science tend to prefer this healing system before the use of drugs or surgery, but unlike some religious sects, it is not required.
Greek for "Christ," meaning the anointed one. Similar to the Hebrew translated as "Messiah." Mystically, we each have a divine inner core (sometimes called "Christ Immanent") that we can manifest.
A time traveler.
A golden yellow variety of peridot. It supposedly can help prevent fever and madness.
Although having a name similar to Chrysolite, Chrysoprase is completely different. It is a form of chalcedony (considered by some to be the most valuable form of this stone), a member of the quartz family. It is generally a translucent bluish-green or apple-green. It is claimed to improve vision and bring joy.
Divination by looking into a reflective or transparent object such as a crystal or various shapes of glass. Although crystal balls are now popular for this, in the past a piece of beryl, in reddish or pale green shades, was often used.
From the Greek chthon, earth. Refers to spirits or deities of the underworld or the souls of the dead.
Deities, spirits, or anything connected or related to the Underworld. It is derived from the Greek word khthonios which means “of the earth.” Some of the oldest beliefs retained within Wicca/Witchcraft originate from the Neolithic period during which we find many chthonic elements apparent in the primitive religions of this era.
A consecrated Witch’s cord, traditionally either nine feet long or based on measurements of the Witch’s body. It can be used to lay out a circle of ritual and may be worn around the waist to represent the initiation level of the Witch. It may also have knots made in it at certain points, again representing the measure of the Witch and/or to hold magical power.
A collection of loose papers that are the original source of the Golden Dawn ritual and magical system, and which played a complex and still uncertain role in the founding of the Hermetic Order of the Golden Dawn.
A code set up by a person who intends to use it, after death, to prove that he or she is actually the one communicating.
Natural cycles of arousal and sleep.
The magic circle is drawn in the astral world about the Magus and the place where the ritual is worked. It forms a division between the magical place and the ordinary world, setting the interior space apart. This allows the region inside the circle to hold a heightened charge of magical potency, and because it is a pure space devoted to worship and magic, it permits the manifestation of spiritual Intelligences that could not be readily perceived in the ordinary environment. The circle also acts as a barrier that protects the Magus from the intrusion of discordant, chaotic forces that seek to disrupt communications with higher spiritual beings, or even to harm the Magus in emotional and physical ways.
The circle is always inscribed from the inside, ideally from the center, in a sunwise direction, and visualized as a glowing or flaming band of light that sustains itself in the air at the level of the heart. Often a corresponding physical circle of the same radius is marked on the floor of the chamber beforehand; but the magic circle does not actually exist until it is made in the astral by a deliberate act of will. For convenience, the circle is made of a size great enough to enclose the ritual place. A single ritualist, if working without an altar in a confined space, might project a circle of six feet in diameter. With an altar at the center, the circle might be nine feet in diameter to permit movement around the altar. Since the circle is drawn in the astral, it can be made larger than the actual physical chamber.
Whatever its size, the circle should always be large enough to comfortably hold all who work within it. Because the circle is magically real, even though immaterial, it must never be casually broken. It is extended from the heart center of the Magus clockwise from the point of the right index finger, or the point of the wand, sword, or knife. It should be reabsorbed at the end of the ritual in through the left index finger, or magical instrument held in the left hand, by retracing it widdershins—against the course of the Sun. It must never be stepped through, although this is a common mistake among occultists. To disregard the substantiality of the circle is to weaken it, and so render it a less useful tool.
Another name for a group of people having a séance.
A temporary boundary within which a séance or magical operation may take place. The theory is that it becomes a kind of psychic container for the energies used in the operation and a barrier to unwanted energies from outside.
The Magick Circle—whether drawn physically or in the imagination—is the "container" of magickal operations. The "Opening" and the "Closing" of the Temple—or of the Circle—is an operation that is both magickal and psychological. The rituals of Opening and Closing are various, but all are simple projections of energy guided by will power with the express intent to provide a barrier against exterior forces while establishing the Circle (or Lodge) as container of the magickal energies.
Circle of Bluestone
Small ring carved from Preseli Bluestone, the substance from which the Inner Ring of Stonehenge is made. It is used with Touchstones on the equinoxes and solstices to unify talismans—combining their power with all other talismans.
Circle of Protection
A magickal circle wherein magick workers are protected from unwanted entities and energies.
Circle of Protection
A circle of people, often holding hands, at a séance.
Circle of Protection
Part of the popular game Magic: The Gathering, which protects you from certain types of damage.
A concept based on the idea of an electrical circuit, which must be complete for a device on the circuit to function. That is, there must be a source of electrical energy, the device to be powered, and then a ground, usually associated with or near the source. This is translated to psychic or magickal energy, where some hold that there must also be a completed circuit to be effective. Thus, when you are sending healing energy to someone, you are the source, the person to be healed is the “device,” and then you receive energy back from that person which needs to be “grounded.” If you block the return—that is, if the circuit is incomplete and you get nothing back—the healing will not be effective. If you do not ground the energy you receive, the working may be effective, but you become the ground, resulting in the absorption, by you, of the negative charge on the energy from the person you were healing. Healers who do not know how to “ground” the energy often end up exhibiting the symptoms of the person they heal. See Also Grounding.
Circulation of the Body of Light
A ritual developed from the Middle Pillar Ritual wherein you move spiritual energy throughout and around your body.
To walk in a circle. In temples of the Hermetic Order of the Golden Dawn, this was a clockwise circling of the temple. It was said to represent the rise of Light. In the Golden Dawn’s inner order, the circumambulation of the temple by initiates would create energy in the form of a cone or vortex. This may be the source of the Pagan/Wiccan concept of a “Cone of Power,” although the influence may have gone the other way. A reverse circumambulation, going anti-clockwise, was symbolic of the fading Light.
A term most often used by ceremonial magicians describing moving within a magickal circle either as part of a ritual or as a way of raising magickal energy.
A person whose gender, as defined at birth, anatomy, and self-identification all match. Compare with transgendered.
City of Pyramids
The destination of the Adept crossing the Abyss. The City of Pyramids is located in Binah.
One of several psychic abilities, this is the description of supranormal auditory talents, allowing a person to hear beyond the normal purview of the ears. Sometimes used to hear a message from a spirit.
The psychic ability to hear things inaudible to most people, such as the voices of spirits, sometimes sounds of inanimate objects such as crystals, minerals, artifacts, etc.
One of several psychic abilities, this is a description of general supranormal feelings. That is, a person with this ability can sense or feel something beyond the normal purview of the physical senses. Sometimes described as a “knowing” of some information.
A person with the psychic ability of being able to sense, feel or know information (including knowledge, smells, tastes, etc.) without the use of the physical senses.
One of several psychic abilities, this is the description of supranormal visual talents, allowing a person to see beyond the normal purview of the eyes. Such visual ability may allow a person to see past or future events or present events which were not physically observed. Also the ability to see non-physical things such as spirits.
Sometimes called “ESP,” the psychic ability to perceive things invisible to most people, such as auras, various health indicators, spirits, as well as things at a distance in space or time. It includes skrying.
"As part of the unfoldment of the human intellect into omniscience, the development occurs at a certain stage of human evolution of fully-conscious, positive clairvoyance. This implies an extension, which can be hastened by means of self-training, of the normal range of visual response to include both physical rays beyond the violet and, beyond them again, the light of the superphysical worlds…It is important to differentiate between the passive psychism of the medium, and even the extrasensory perception (ESP of parapsychology), and the positive clairvoyance of the student of occultism. This later, completely under the control of the will and used in full waking consciousness, is the instrument of research…to enter and explore the Kingdom of the Gods."
Hodson, Geoffrey: The Kingdom of the Gods, 1953, Theosophical Pub. House, Madras, India. (Page 9)
Suggested Reading –
Slate & Weschcke: Psychic Empowerment for Everyone
Although generally used to mean any sort of psychic ability to know, see, hear things from afar, in the future or past, on the spiritual planes, etc., among Spiritualists it is specifically used to mean some form of psychic sight. It can occur to a person either as subjective clairvoyance or objective clairvoyance, and it may be challenging for a person having this experience to discriminate between the two. Further, the difference between the two can be viewed as a continuum rather than as an either/or type of experience.
A person with the power of clairvoyance. A mental medium.
(“Clear-seeing”) A person who can obtain information through “seeing” things that others do not.
A person who obtains information through “seeing” things, usually via spirit communications, that others do not.
A rare phenomenon in which clairvoyant information is revealed in reverse form. Examples are reversed numbers and spellings.
An amulet made from an animal’s claw provides the wearer with protection according to the strength of the animal. A tiger’s claw amulet will be more powerful than one made from a badger’s claw, for instance. A bear-claw amulet is believed to help women during childbirth.
A term used to describe the removal of forms of negativity. This can be done spiritually (as in the performance of a banishing ritual) or physically (as in covering an area with salt and then sweeping the purifying salt away).
The process of driving evil or negativity from a person, thing, or area. Witches and magickal practitioners clear an area before performing magick. Similar to a banishing, but often does not use a formal ritual.
Clearing a Bad Psychic Atmosphere
There are many advanced magical techniques for this, but there are also simple folk practices.
Divination by observing the significance of random things people say.
Divination by observing a string-suspended key, similar to the pendulum.
Divination by means of a key suspended from a thread held between thumb and forefinger—actually a pendulum—lowered into a glass jar. After a question is asked, the key will knock against the glass—one knock for yes, two for no.
A beautiful Grecian water goddess who was the daughter of the river god Asopos and the river goddess Melope.
Divination by throwing runes, dice, or something similar.
Divination by the casting of lots, also called sortilege, or throwing dice. Actually the common flipping of a coin is a simple variation. An early form of dice was made from actual knucklebones of certain animals. Later forms were made of clay, ivory, wood, and plastic. A single die has six faces, and each face of the die has from one to six dots. In dice divination, three dice of the same size are shaken between your hands or in a cup while thinking of your question or problem, and the number of dots shown on the dice landing face up is interpreted.
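The throwing procedure described above can be sketched in a few lines of Python. This is purely an illustration, not from the source; the meanings table is a placeholder, since traditional interpretations of the totals vary from source to source.

```python
import random

# Placeholder meanings keyed by the total of three dice (3 through 18);
# actual cleromantic interpretations differ between traditions.
MEANINGS = {total: f"(interpretation for a total of {total})"
            for total in range(3, 19)}

def cast_three_dice(question):
    # Shake three dice while concentrating on the question, then
    # interpret the number of dots shown on the faces landing up.
    total = sum(random.randint(1, 6) for _ in range(3))
    return total, MEANINGS[total]

total, meaning = cast_three_dice("Should I take the journey?")
print(total, meaning)
```

With three six-sided dice the total always falls between 3 and 18, which is why traditional interpretation tables cover exactly those sixteen values.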
The term “clinical,” often seen in advertising, simply means something has been done with people rather than by other means, such as in a test tube. Clinical hypnosis (or clinical hypnotherapy) generally refers to the practice of hypnosis and suggestion to help people change or eliminate unwanted behaviors. Clinical hypnosis is usually performed in an office (a “clinic”) designed for this function.
In UFO lore, contact between human and alien beings. There is much to suggest that these may not be physical beings but perhaps projections of the Unconscious.
Others have proposed further types of close encounters as well.
A set of cards used in ESP testing, such as Zener cards. A subject tries to determine or "guesses" the symbol on the card being viewed by a tester, or predicts the next card, and each card appears a fixed number of times. Statistical analysis of tests run using a closed deck uses different protocols than testing using an open deck. (See "Open Deck.")
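The statistical point can be illustrated with a short simulation (not from the source; the symbol names and run counts are assumptions). Both designs give the same chance expectation of 5 hits in 25 guesses, but because a closed deck fixes how often each symbol appears, its trials are not independent and the hit counts are distributed differently, which is why closed-deck analysis needs its own protocols.

```python
import random
from statistics import mean, variance

SYMBOLS = ["circle", "cross", "waves", "square", "star"]  # Zener symbols

def open_deck_hits(n=25):
    # Open deck: every trial is an independent draw, so a symbol may
    # appear any number of times in a run.
    return sum(random.choice(SYMBOLS) == random.choice(SYMBOLS)
               for _ in range(n))

def closed_deck_hits():
    # Closed deck: each symbol appears exactly 5 times in the 25 cards,
    # so the trials are not independent of one another.
    deck = SYMBOLS * 5
    guesses = SYMBOLS * 5
    random.shuffle(deck)
    random.shuffle(guesses)
    return sum(d == g for d, g in zip(deck, guesses))

random.seed(0)
open_runs = [open_deck_hits() for _ in range(20000)]
closed_runs = [closed_deck_hits() for _ in range(20000)]

# Same chance expectation (about 5 hits in 25 guesses) for both designs,
# but the spread of scores around that expectation differs.
print("means:", round(mean(open_runs), 2), round(mean(closed_runs), 2))
print("variances:", round(variance(open_runs), 2),
      round(variance(closed_runs), 2))
```

Because the closed-deck hit counts are not binomially distributed, significance tests calibrated for independent (open-deck) trials would give misleading results if applied to closed-deck data.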
The end of a ritual. At the beginning or opening, the magick circle was formed. During the closing, the circle is taken down, returning the space to its normal level of sacredness. If people have dedicated temple space, they may choose to close or end the ritual but allow the sacred space to remain as charged by the ritual.
Clothing for Rituals
The G∴B∴G∴ calls for simplicity. The requirement is only that the ritual clothing be different from one’s customary clothing. While fancy ceremonial robes would fulfill this requirement, they would be contradictory to the G∴B∴G∴ emphasis on the Great Work in contrast to the accessories. A simple garment cut from cloth with holes for neck and arms could be sufficient, as could an inexpensive bath robe or night shirt.
In some Wiccan circles the choice is "skyclad"—nakedness—so that all are equal before the gods and nothing is in the way of the body's natural energy. The observation can be made that nakedness does not make us equal, since we are very conscious of our physical appearance—slim, fat, tall, short, hairy, etc.—a distraction from the Magick.
During the process of divination using a crystal ball, before visions appear the interior of the crystal becomes hazy, misty, and unclear. This is known as “clouding.”
Here's the realization: we have all along been unconscious co-creators, and now we have the growing realization that we must be conscious co-creators, broadly aware of our own transgressions of natural law, or else the human experiment will end in failure.
Louis T. Culling wrote: "Let the cynic or quibbler, who would think that this is an avoidance of realism, a kidding of one's self, practice the opposite of these trances! He would then get a well-deserved dose of his own medicine." What we build in our consciousness shapes outer reality.
Culling, Louis T. & Weschcke, Carl Llewellyn: The Complete Magick Curriculum of the Secret Order G∴B∴G∴, Llewellyn, 2010.
A cluster of nerves residing at the base of the spine.
The tailbone. The coccyx is comprised of the three to five vertebrae below the sacrum, at the very base of the spine.
The name, meaning "River of Wailing," of one of the five rivers in Greek mythology that surround Hades. The other four rivers are the Acheron ("River of Woe"), the Lethe ("River of Forgetfulness"), the Pyriphlegethon ("The Fiery River"), and the Styx ("The Hateful River").
In Postmodern magick, a set of filters through which we deal with any aspect of life. We have a set of codes for money, health, sex, learning, etc. The group of codes that allows us to live in the “real world” is called a “semiotic web.” Magick allows us to change our codes.
Coffee Grounds Reading
A variation of Tasseography (also known as Tasseomancy, and sometimes Tassology), which encompasses Tea Leaf Reading. While the "technology" is somewhat different than for tea leaves, the basic principles are similar.
Cognitive Functions Perspective
Inducing physical relaxation through intervention into the mental functions related to relaxation. Common examples are the use of visualization and suggestion to induce a peaceful, relaxed state.
(CBT) Any number of processes, including hypnotherapy, in which cognitive therapy (psychotherapy) and behavioral therapy are combined. The objective is that positive changes in the emotional state (feelings, beliefs, etc.) will generate positive behavioral outcomes.
A seeming simultaneous happening.
A technique used primarily by fake psychics and false mediums to deceive people. The basic principle of cold reading is that people can be pumped for information and then made to believe that this data was received supernaturally. This is combined with people’s almost egocentric tendency to find serious meaning in general statements. Most people help the “miracle” along by tending to read deeper into something than is justified.
The ninth letter of the Ogham tree alphabet, representing the letter C and meaning "hazel."
Hazel is associated with salmon in Celtic lore.... Coll represents creativity, poetry, divination, and mediation.
An apparition seen by more than one person simultaneously.
The combining of clairvoyant faculties of two or more persons to gather clairvoyant information.
In the writings of psychologist Carl Jung and his followers, the deepest stratum of the unconscious that contains material relating not to the individual but to humanity as a whole. The most important presences in the collective unconscious are the archetypes. According to Jungian theory, these are reflections of primal instincts, and also the patterns on which gods and other mythic entities are based. Contacting the archetypal patterns of the collective unconscious in a conscious and balanced way is an important part of the process of individuation, the goal of Jungian psychological work.
A concept from the theories of psychologist Carl Jung, the function of the Personal Consciousness that bridges to the collective racial, cultural, mythic, even planetary memories and the world of archetypes of the Universal Consciousness, making them available to the Psyche mainly through the Sub-Conscious Mind.
The memories of all of humanity, perhaps of more than human, and inclusive of the archetypes. The contents of the collective unconscious seem to progress from individual memories to universal memories as the person grows in his or her spiritual development and integration of the whole being. There is some suggestion that this progression also moves from individual memories through various groups or small collectives—family, tribe, race, and nation—so the character of each level is reflected in consciousness until the individual progresses to join in group consciousness with all humanity. This would seem to account for some of the variations of the universal archetypes each person encounters in life.
The name for the experience of several people simultaneously observing a visual image such as an apparition.
Color Ray of Influence
Every mineral reflects some portion of the natural color spectrum. These reflections connect the crystal energy and earth power of the mineral to the human mind. These are the Color Rays of Influence.
The idea that colors have certain practical or mystical meanings. This concept is often used in candle magick and stone magick. It can also be used in artistic works where the artist wishes to communicate an idea without using words.
The idea that colors can have an effect on healing. Techniques can include visualization, breathwork, colored lights, etc. Different systems have either very simple and generalized ideas of how colors have an effect (i.e., red increases energy) to certain shades of color having very specific effects on organs, organic systems, or illness.
Color Wheel of Influence
Shows the cycles of human life and the cycles of the Earth combined within the color spectrum. It matches human activities, desires, and life events to the cycles of the Earth and depicts the appropriate color ray of influence for a talisman.
Composite Strategy for Telepathy
A two-component strategy for activating telepathic sending and receiving.
A technique used in hypnosis that consists of repeating suggestions while a person is in trance. Compounding may be done by using the same words or in different ways. The goal of compounding is to encourage the acceptance of the suggestions.
A cloth or similar material soaked in some form of liquid, often infused with herbs or other medicines, and applied to the body for healing. In its simplest form it is used to apply pressure, heat, or cold to an affected area or wound, as when stopping bleeding.
A desire to do something that is so strong most people cannot prevent themselves from doing it. Even if it is against their conscious desires, they will still do the action. Usually requires outside assistance to overcome.
Cone of Power
Expression primarily used in Pagan traditions signifying a visualized cone with the point up and the edge of the cone matching the ritual circle used for containing magickal energy that is raised in a ritual before being directed to its goal. May be used by an individual or group. In the Hermetic Order of the Golden Dawn this was described as a “vortex of energy” and was built up via circumambulation.
Cone of Power
Also known as a "silver cone." A cone is commonly visualized as an extension of the Magick Circle to function as a container of inner strength and purpose and as a barrier against external disturbance. In Witchcraft, it is used to reach the target of the magical operation.
The natural ability of the mind to create imagined situations—false memories—to fill in gaps in the real memory that have been caused by physical trauma to the brain or psychic trauma to the mind. If a person is hypnotized and the hypnotist is not trained in procedures to help regain objective memories (known as forensic hypnosis), the subject may create false memories to fill in gaps or simply to please the hypnotist. False memories that were confabulated during hypnosis sessions led by non-professional hypnotists were one of the causes of the so-called “Satanic Panic” of the last quarter of the 20th century.
A hypnotic induction where the hypnotist tells confusing stories that are impossible to logically follow. Eventually, the person being hypnotized gives up trying to follow them and simply listens to them. This can be seen by certain physiological signs. At this point the hypnotist suggests relaxation and hypnotic trance, and the person follows the directions.
A term used by the G∴B∴G∴ to describe intercourse used in sex magick.
In geomantic divination, a figure of two dots above one dot above another single dot above two dots. The word itself is Latin for “conjunction.” In a divination it is rather neutral, amplifying other factors of the reading.
(0 degrees)–Planets that are together in the zodiac. Indicates prominence of the two energies.
Conjunction occurs when bodies are 0 degrees apart. The influence of the conjunction is variable, depending upon the nature of the two bodies or points involved. It indicates a unification and intensification of planetary energies.
Originally a practice that may have included such things as chanting and physical motions with the purpose of evoking a spirit. Now often used as a generic term for “magic.”
The exact definition of this term is “to make something appear as if by magic.” Some practitioners of natural magick refer to their practices as conjuring. Today, the term is mostly used to mean using tricks to imitate real magic. It was popularized in England where, in order to make sure performers would not be mistaken for real magicians during a time when the practice of real magic was against the law, many entertaining performers referred to themselves as jugglers or conjurors. Today it is common to use “magic” to indicate performers and “magick” to indicate the creation of willed changes through uncommon methods.
That half of human consciousness that operates during waking hours.
That part of the mind that gives us awareness of ourselves and things outside of ourselves.
Consciousness During Ritual
Within a ritual, you are in a state of light trance, and you do not want anything to startle or jar you back into full objective consciousness until the ritual is over. The Ritual must work up without interruption to an absorbed seeking of inspiration from the Holy Guardian Angel.
Acts and/or words used to make a person, place, or thing sacred. Once a consecration is complete, the person, item, or place so consecrated is ready to take part in a magical ritual.
The process of dedicating a person, place, or thing to a spiritual path or entity, usually a deity. Often follows a cleansing (or banishing or purification).
The philosophical theory that concepts of what are and are not real must be agreed upon (not by a test or statement, but by "consensus," a general agreement) by a group or culture. That is, if enough people believe some concept is "real," whether it is or not, the cultural view is that it is real. An example of this is the concept that at one time it was believed by most Western Europeans that the Earth was flat. Maps and expeditions were made based on this belief. In reality, the Earth is not flat. In fact, the belief that most people during the Middle Ages and before believed that the Earth was flat may, in itself, be a form of consensus reality. There is now a belief that educated and influential people knew the Earth was spherical, and the widespread popularity of the flat Earth myth actually developed only in the 19th century due to the popularity of Washington Irving's book, The Life and Voyages of Christopher Columbus, a novel that claimed most thought the world was flat until Columbus.
The concept that the physical world, as we perceive it, is an illusion—an illusion so powerful that we accept it as "real," and in our endeavors to understand this reality we have given it still greater "hardness" through our accumulation of history, myth, fable, and statements of physical laws. All of this has been limited to the perceptions of physical senses and rejection of the non-physical.
Part of a religious ritual where the wafer and wine metaphorically become the body and blood of Jesus. Usually found as part of some Protestant rites. In the Roman Catholic Mass ritual, believers feel the wafer and wine become the actual body and blood of Jesus. This is known as transubstantiation.
Contact Mind Reading
A technique, traditionally used in the beginning of training for the development of telepathy, where one person reads the mind of another while lightly touching the other person (usually holding the hand or an arm). Also known as muscle reading or Hellstromism, after the performer Hellstrom who demonstrated this in public shows. To develop true mind reading, first learn contact mind reading; then, instead of touching the person, connect with the person through a hard object such as a yardstick or broom handle. Once that is perfected, replace the hard object with a soft one such as a handkerchief. When that is perfected, eliminate the physical contact, but look at the person's eyes. Finally, perform tests without looking at the person.
A generalized term for non-physical forces or entities with whom communication of some sort is established. Often, these contacts will be the first communication a person has when going into trance. Contacts may have information or guidance for the person communicating with them.
Focusing your attention on something. The second step in true meditation.
Continuum of Attentiveness
A phenomenon characterizing the liberated discarnate state, in which command of past experiences exists on a need-to-know basis.
Continuum of Awareness
A phenomenon characterizing the typical incarnate state of constricted memory.
A group of random people who are given a test, the results of which give a baseline range or “norms” as part of clinical experimental protocols. A subject would then be tested and the results of his or her tests would be compared to the norms.
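As a minimal sketch of how such a comparison might work (not from the source; the control scores are invented for illustration), a subject's result can be expressed as a z-score relative to the control group's norms:

```python
from statistics import mean, stdev

# Hypothetical control-group scores (e.g., hits out of 25 on an ESP test).
control_scores = [4, 5, 6, 5, 7, 3, 5, 6, 4, 5]

norm_mean = mean(control_scores)   # center of the baseline range
norm_sd = stdev(control_scores)    # spread of the baseline range

def z_score(subject_score):
    # Standard deviations above (positive) or below (negative) the norms.
    return (subject_score - norm_mean) / norm_sd

print(round(z_score(9), 2))  # a subject scoring well above the baseline
```

A subject whose score falls far outside the control group's baseline range (a large positive or negative z-score) is the kind of result such protocols are designed to detect.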
Meditation exercise designed to shape the future by generating images of desired developments or outcomes.
After the expulsion of the Jews from Spain in 1492, those Jews who remained in Spain and pretended to convert to Christianity but still practiced Judaism—and often mystical Judaism in the form of the Kabalah—in secret.
An odd suggestion given by a hypnotist to a person the hypnotist is trying to lead into trance to see if that person will follow the suggestion, thus proving to the hypnotist that the person is actually in trance. Later, if the person doubts he or she was hypnotized, the hypnotist can remind him or her of how they followed the suggestion.
A derogatory term applied to a Solitary Witch or a self-formed coven of Wiccans who base their magick not on thorough training, but on simply following any popular book of spells. They may simply collect the materials for the spell that are listed in the book and then follow the instructions, going so far as to read the book during the casting of the spell. This is similar to following a recipe from a cookbook, which is the source of this term.
Abbreviation for “Circle of Protection.”
A sweet gum excreted by certain trees used as an incense by some Pagans and ceremonial magicians.
A rope or string used as a badge of rank in some Wiccan traditions. The color of the cord indicates the Degree of the Witch within the tradition. In some systems this is also known as the "measure."
A rope used to measure or make measurements in certain magickal spells.
The use of ropes that are knotted in a certain manner in order to focus and release power.
In Europe and among some Pagans, any cereal grains such as wheat, barley, rye, etc., but excluding maize. Curiously, in the US it refers to maize and excludes other grains.
See Crop Circle.
A simple figure made by weaving wheat stalks together into a form symbolic of fertility and the harvest-oriented aspects of the Goddess. In most English-speaking countries, the term "corn" refers to any grain except maize. In the US, however, corn refers to only one grain: maize. Corn dollies are not made from the husks or cobs of maize.
A ritual popular among Western Christians that commemorates the institution of the Eucharist and the presence of the body (corpus) of Jesus in the Eucharist. In the Hermetic Order of the Golden Dawn, it is a yearly ritual that consecrates the Vault of the Adepti used for Inner Order rituals.
A method of assigning meaning and interdependent connections to the various aspects of the visible and invisible worlds wherein each color, sound, metal, plant, animal, organs of the human body, or anything in the material world, is said to have its origin in the invisible through specific energetic signatures. Astrology plays a significant role in assigning and deciding correspondences.
The Kabbalah, using the symbolic system of the Tree of Life and numerological associations provided through the Hebrew language, Astrology, and Natural Science, identifies a wide range of correspondences between subjects, planets, herbs, plants, metals, crystals, colors, animals, angels, deities, etc. that allow substitutions of one thing for another, or that augment understanding about one thing by knowledge of another of corresponding value.
Mostly the applications of correspondences are divided into Magical, Medical, Numerical, and Tarot usages.
Divination by sieve suspended by shears or tongs, or by threads from the fingers. People would be named and the movement of the sieve would be interpreted.
The full realization of personal growth potentials related to past-life experiences.
The comprehensive records of the cosmos, which include our personal archives.
A phrase coined by Richard Bucke to describe his own experience of unity with the universal consciousness of the cosmos. Bucke believed this to be the goal of human evolution.
Cosmic Gateway to Power
A procedure leading to self-empowerment through astral projection facilitated by a framed sheet of clear glass.
Each individual’s unique spiritual or cosmic makeup, which remains unchanged from lifetime to lifetime. Also known as a Spiritual Genotype.
The soul’s unique spiritual identity that exists in perpetuity.
Raised awareness of the cosmic scheme of our existence, accompanied by increased knowledge of our higher self. See Cosmic Actualization.
The universal language of the spirit realm.
Cosmic Life Force
The energizing foundation of all reality, both tangible and intangible.
Used by a small group of astrologers, Cosmobiology and its principal tool, midpoints, was developed in Germany in the 1920s. Cosmobiologists do not use houses or traditional aspects. Instead, they read charts with the midpoints, or the halfway point between any two planets. The theory is that the midpoint is the place where the energies of both planets unite. The midpoint is also considered a sensitive point in predictive astrology. Cosmobiologists use midpoint "trees," a listing of the midpoints for an individual chart, and note the contacts made by other planets, which result in what they call "planetary pictures."
A way of explaining the universe and its processes. The word can be used in both scientific and religious contexts.
In theosophy, a term meaning the greater universe—including spiritual planes—and not just the observable physical universe.
Cosmos Wide Web
Modeled after the World Wide Web (WWW) of the Internet, the CWW recognizes existence of the Akashic Records imagined as "infinite-sized and capable data banks and servers, accurately accessed through a speed-of-light fast astral search engine using an imaginary keyboard and large monitor to call up and see those records we desire."
In 1917, in Cottingley, England, two young girls claimed that they not only played with fairies in their yard, but took photographs of them. People such as Sir Arthur Conan Doyle, creator of Sherlock Holmes, investigated the girls and the photos and proclaimed that the girls were honest and the photos were real. One of the girls, late in her life, finally admitted that the photos were fakes, and one investigator actually discovered the source of the images: double-exposed photos using cutouts from a children's book. The movie Fairy Tale—A True Story, directed by Charles Sturridge and released in 1997, was based on these events.
Thracian goddess also known as Kotys, she was a goddess of sexuality similar to Dionysus. A riotous festival in her honor was called the Cotyttia, and her followers were known as the baptai, implying a part of what was required to follow her, a baptism. Her worship extended from Thrace to Italy and Sicily.
See Emile Coue.
Council of Ancyra
Also known as the Synod of Ancyra, this was a meeting of Christian elders held in Ancyra in the Roman province of Galatia (modern Ankara, Turkey) in 314 C.E. It’s important because it was the first meeting after the end of Roman persecution of Christianity with the overthrowing of Maximinus the previous year. Among the things they ordered were punishments for those who worshiped (freely or due to force) with Pagans, that clergy should not be vegetarians, that if you’re under 20 years old and have sex with animals you can be admitted to the church, people who commit adultery are not completely accepted in a church for seven years, and women (nothing about men) who perform abortions or make drugs for abortion must serve ten years of penance. It also states that if you perform divinations, follow Pagan traditions, or do magick you must suffer five years of penance.
A suggestion given to a hypnotized person to overcome or “counter” a current belief. Counter-suggestions may even be used to replace deep core beliefs.
A second divination performed to check the correctness of the first one; sort of like a second opinion in medical practices.
The Page, Knight, Queen, and King of each suit of the Tarot. Some decks may use other names, such as Princes and Princesses, for certain court cards.
A group of practitioners of Witchcraft. Traditionally composed of thirteen or fewer people, some covens are as small as two or three while others are much larger in number.
Covenant of the Goddess
Formed in 1975, Covenant of the Goddess is a group of covens and individuals linked to provide communication, as well as legal standing, to the members.
The area around the physical location of a coven (the covenstead). This is traditionally one league [most commonly equal to three miles, but it has been different historically and in different locales] in all directions.
In Wicca, the meeting place of a coven.
Also See: coven
Term used by Witches and Wiccans to mean someone who is an outsider; not of the Craft. Similar to the term “Muggle” used in the Harry Potter novels.
A shorter version of Witchcraft and used instead of that term or Wicca. It is also used by Freemasons to describe their fraternity without publicly naming it.
Also See: Craft, The, The Craft
A generic term for rules set up by covens. There are two basic types of such rules. The first are for behavior among coven members. These include practices within a circle (for example, how to behave) and outside a circle (for example, which member is responsible for certain activities such as calling the members to a meeting). The second type describes what members should do during times of trouble and persecution for the protection of the coven and its members.
Crazy Lace Agate
Crazy lace agate, also known as Mexican agate, is an attractive, white, opaque stone, patterned like a beautiful, multicolored paisley cloth. It is a cryptocrystalline quartz, found in Mexico.
In ancient times, this agate was worn to placate the gods, and to give courage. It will sharpen your sight, help the eyes, illuminate your mind, allow you to be more eloquent and give vitality. It keeps the wearer well-balanced and serious. Lace agate strengthens the Sun in its wearer, and improves the ego and self-esteem. It gives you a feeling of consolation despite the hardships of life. It has been considered symbolical of the third eye, and the symbol of the spiritual love of good. It helps to banish fear. It is a good general healing stone.
A mythic story used to explain the formation and existence of the phenomenal world.
Creative visualization involves the fashioning of an image in the conscious mind and the charging (and constant recharging) of that image by the enormous psychic energy of the unconscious.
A process of using visualizations to affect your unconscious. This will have an effect on the Astral Plane leading to changes on the physical plane.
The appearance and phase of the moon when it is between new and full. Often the image of a partial moon is used as the sign of the High Priestess or of a degree level among some covens.
The largest Greek island in the Mediterranean Sea. It was the center of the Minoan civilization that thrived between about 2600 b.c.e. and 1454 b.c.e. Although it has since been ruled by Christians and Moslems, it is believed to have been one of the few European civilizations known to have had a period of primarily goddess worship. Cretan Goddess statues show Her wearing a full skirt and an open vest exposing Her breasts. She is shown holding a snake in each hand. Reproductions of this image are popular among many Neo-Pagans.
An apparition of a deceased person seen on (or just before or after) the anniversary of his or her death.
Term describing types of divination by grain. One method is to observe the appearance of cakes made for ritual consumption, especially cakes made of barley. It also is a term for looking at the dough for such cakes, looking for signs indicating future events. Some also relate this to the practice of tossing grain on the carcasses of animal sacrifices and looking for divinatory patterns in the way the grain falls.
The part of the mind that under normal conditions critically analyzes all information that is received by the mind through the senses (and via self-talk). In hypnosis, the Critical Factor is bypassed, allowing the client to accept suggestions that are within the client’s moral compass.
The careful observation and remembrance of details making their later organization and analysis possible. It is an essential feature of astral and magickal training.
Baking a single symbolic object into one of many food items—cakes, pancakes, cookies, muffins, rolls, etc. The person who is served that piece containing the charm determines his or her future according to traditional meanings:
Also spelled "Crithomancy."
Name of a temple that was an offshoot of the Hermetic Order of the Golden Dawn and Anglican clergymen.
Also see Dolmen.
Divination in onions.
One of the three aspects of the triple goddess, the Crone has the image of an elderly woman. Also, an honored elder in some Pagan traditions. The image of a Witch found in many folk tales is actually that of the Crone. The age is necessary to show the wisdom she has acquired. Physical malformations such as the stooped stance and the infamous nose wart indicate the work she has done for the community as well as her skill as a healer.
A ritual marking a woman's movement from representing the "mother" aspect of the triple goddess to that of the "crone," respected for her knowledge and experience.
Sometimes associated with, but not limited to, UFO lore in which 1) often crudely formed "amateur" geometric patterns are found in agricultural fields of such crops as grass, corn, wheat, etc., or 2) finely detailed and "professional" complex geometric patterns appear in the midst of field crops. In both instances, these large-scale "drawings" happen at night. In most of the "amateur" class examples, pranksters have admitted to dragging wood beams to break down the crops. In the case of the "professional" class examples, no human explanations have been uncovered. Theories range from alien communications to unusual natural gravitational or electromagnetic effects. Until understood, the phenomenon remains classed as paranormal.
Some sort of message from non-physical entities such as spirits.
In most covens, the process leading to membership and initiation requires a period of study and practice, often a year and a day. Sometimes, to honor leaders of different covens or help establish friendly bonds between them, initiations will be given without such training. That is, a Witch of tradition 1 initiates into that tradition a Witch from tradition 2. In turn, the Witch from tradition 2 initiates into that tradition the Witch from tradition 1. Theologically, this allows covens to work with the specific deities of the other coven. It also allows a High Priest or High Priestess to officiate at another coven. This can be valuable if the person regularly working in this role becomes ill, leaves the coven, or dies.
Cross of Confusion
Supposedly an ancient Roman symbol which questioned the validity of Christianity. It consists of an equal-armed (solar) cross; however, the four bars of the cross do not meet at the center. Instead, there is a dot in the middle. The descending bar is replaced by a sickle, giving the appearance of an inverted question mark. Popularized in recent times as a symbol used by the band Blue Oyster Cult.
Cross Quarter Days
The Pagan Wheel of the Year consists of eight major holidays, or Sabbats. Four of these are distinctly solar in nature: the two equinoxes, when the amount of daylight and night time are equal, and the solstices, the time of either the longest day or night. Dividing the time between a solstice and an equinox is a cross quarter day, also known as a fire Sabbat as they feature bonfires. They are commonly named Samhain, Imbolc, Beltane, and Lammas.
A crossing spell is the name for a folk magick rite that puts a curse or cross (in the form of an "X") on a person.
Crossing the Bridge
A Wiccan rite of passage performed at the death of a loved one. Corresponds to the concept of a funeral. Different Wiccan groups have various Crossing the Bridge rituals, ranging from a focus on a spiral dance—representing the spiral of life—to a re-enactment of the goddess's famous descent to the underworld and return. Will often include a celebratory aspect with feasting, drinking, storytelling, and dance in honor of the deceased.
The intersection of two roads, such a location is sometimes a locus of paranormal activity. This may be related to the ancient worship of the Greek goddess Hekate. Besides being the goddess of the home, of newborns, and of Witches, she was also considered the goddess of the crossing of three roads. Often, a small pile of stones at such sites would mark the location of her worship. By the end of the sixteenth century, such worship was downplayed and the location of three roads (tri via) resulted in a word, trivial, meaning ordinary, commonplace, or vulgar. In the early 1900s, Trivialities was the title of a book by L.P. Smith which popularized the term as meaning “things of little consequence.”
Also See: crossroad
Aleister Crowley (1875-1947) was the foremost ceremonial magician of the first half of the 20th century. He was born in Leamington, England, on October 12, 1875, the son of fanatical Plymouth Brethren. His mother called him the Beast of Revelation, whose number is 666, and Crowley embraced this identification. He attended Cambridge and began to study occultism. He was an accomplished chess player, mountain climber, and poet. In 1898, he joined the Hermetic Order of the Golden Dawn. In 1903, he married Rose Kelly. In 1904, while on an extended honeymoon with Rose in Cairo, he received The Book of the Law from a "praeternatural" entity calling himself Aiwass. This book identified Crowley as the Logos of a New Aeon, and Crowley spent the rest of his life trying to spread the new religion. He died in a rooming house in Hastings on December 1, 1947.
Aleister Crowley (1875-1947) was one of the most controversial figures in recent Western occultism. He inherited a considerable fortune, and died a pauper. He had great intellectual genius but wasted much of it on shocking the world as he knew it with occasionally bizarre antics and lifestyle. He was trained in the Hermetic Order of the Golden Dawn, later formed his own Order of the Silver Star, and then took over the O∴T∴O∴ (Ordo Templi Orientis). He was a prolific and capable writer of magick technology, and is best known for his transcription of The Book of the Law, received from a spirit named Aiwass, proclaiming Crowley as the Beast 666 of the Book of Revelation and announcing a New Aeon of terror and advancement for the world. His magickal books and his Thoth Tarot Deck are worth study.
One of the power centers found in spiritual traditions with a source in the Indian subcontinent. The Crown Chakra (Sahasrara in Sanskrit) is physically associated with the top of the head. When this chakra is overcharged or undercharged with spiritual energy, it implies that you are dealing with issues of spirituality and direct communication with the Divine. When appropriately charged it is said to indicate that you have freed yourself from those attachments in life that lead to disappointment and unhappiness. When the Kundalini energy rises and fully excites this chakra, it releases (or triggers the release of) a substance known as "amrita," a special fluid that is supposed to grant immortality. It is symbolized by a lotus flower that has twenty circular sets of fifty petals each, totaling one thousand petals.
Crown Chakra (Sahasrara) Correspondences
Alchemical Planet: Mercury, Uranus
Inner teacher
Suggested Reading: Dale: The Subtle Body: An Encyclopedia of Your Energetic Anatomy
A term used by some Spiritualists to describe all aspects of mental mediumship. This is where there are no physical phenomena produced; however, the medium is able to bring forth messages to the sitters. The term was coined by Charles Richet.
Supposed original thoughts that are actually the recall of forgotten memories. These may be mistaken as paranormal revelations and may explain some past-life memories.
The search for and scientific study of animals whose existence is mythic or unproven, ranging from the sheep with the Golden Fleece to Bigfoot and "Nessie" (the Loch Ness Monster).
A crystal is a solid material with a regular internal arrangement of atoms. Because of this orderly composition, it may form the smooth external surfaces called faces that allow us to see into the crystal when it is clear. Most all stones are made in part of silica. The presence of this silica is what gives crystals their luminosity and crystal clearness. Crystal is brittle—as we are—and as such is a reflection of ourselves. As we shatter our being, it is seen to be rigid and crystalline in structure. Crystals are described as being clear, milky, having rainbow prisms within, or having fractures visible along their length.
A sphere made of a crystal, typically quartz. The fewer occlusions and imperfections it has increases its value. Often used as a focal point for concentration with a purpose of divination.
A round ball of quartz crystal or glass used as focal point in skrying. Gazing at the ball, one enters into a trance-like state where dream like scenes and symbols are seen and interpreted. Similar aids are the Magic Mirror, a pool of black ink, a piece of obsidian.
Also known as Crystallomancy, a technique typically using a crystal ball to engage and liberate the mind's psychic powers while promoting a state of general self-empowerment. Crystal gazing opens the channels of the mind and permits the free expression of multiple inner faculties.
Crystal Gazing Focal Shift Technique
A meditational technique generating an empowered mental state conducive not only to increased psychic awareness but personal insight as well.
Crystal Gazing Focusing Procedure
A program to stimulate psychic faculties and awareness expansion using a crystal ball.
See Crystal Ball.
A technique that uses an out-of-body session to re-experience the past-life origins of traumas that have resulted in negative effects in your current life, liberating yourself from their impact.
Crystalline Sphere Technique
A psychotherapy technique designed to explore past-life influences and extinguish fears associated with unresolved past-life trauma.
Divination with a Crystal Ball.
Crystals and Gems
Types of stones given value by their beauty of color, clarity, and shape. The term is usually attributed to those stones of color that are not clear quartz by those who use such stones for mental, physical, or spiritual healing and attunement. Each gem is associated with various qualities or powers determined by historical attributions (amethyst, for example, is said to prevent drunkenness), color magic, or modern research and experimentation.
Cube of Space
A model, found in the Sepher Yetzirah, of how the invisible energies expressed by the Hebrew alphabet interact with one another to create the invisible worlds.
(koo-khullin) - The great epic hero of old Ulster stories such as the "Cattle Raid of Cooley." He was the incarnation or manifestation of the Celtic high god Lugus (Lugh or Llew).
From the Latin cultus, meaning “care, cultivation, worship” by way of the French culte. In English it was originally used in the 17th century to mean “worship” or “a particular form of worship.” It referred to the homage paid to a deity. Thus, Christianity, Judaism, Protestantism, Islam, etc. are all cults by the original meaning of the term. It went out of use in the 18th century, but was revived in the middle of the 19th century as a descriptive term of ancient or primitive forms of worship. Thus, Shamanism, Druidism, and Paganism are cults according to the 19th century definition. In recent years the term has come to mean a group, frequently relatively small in number, that is perceived as spreading false teachings, taking advantage of members or outsiders, and/or is “evil.” In this sense it is sometimes used as an epithet by members of one group—usually larger in number and having a longer historical existence—against those of a group that the first one doesn’t like. Thus, to some sects of Christians, Pagans, Satanists, the Seventh Day Adventists, and the Church of Jesus Christ of the Latter Day Saints (the Mormons) are types of cults.
In the early 19th century, the term also started to be used to describe extreme devotion to a person or thing. Today this is popularized in the expression, “cult of personality.” That is, some people regard a leader or product with misplaced or excessive admiration, ignoring or denying any facts that would show this admiration to be misplaced.
The use or outright adoption of concepts, beliefs, practices, etc. taken from one culture—usually a minority culture—by another, usually dominant culture. While this is a common and natural occurrence in the evolution of cultures and societies, in some instances the appropriation is partial (to fit in with preconceived notions of the dominant society) and even incorrect, leading persons in the minority society to consider it theft and a destruction of their beliefs and practices. Thus, in many instances cultural appropriation is considered a negative practice.
Named after Stuart Cumberland (born Charles Garner, 1857–1922) who astounded European audiences with the appearance of true mind reading abilities. He appears to have used Contact Mind Reading.
Derived from early pictographic writing, cuneiform consists of wedge-shaped marks that, when placed together, form words. Cuneiform was written by pressing the ends of prepared reeds into soft clay tablets and cylinders. It was in wide use in Sumer, Babylon, and Assyria.
A form of oral sex in which the clitoris and labia are stimulated with the mouth and tongue.
Common term for the container in rituals that hold consecrated water or wine. In ceremonial magick orders, this tends to be in the form of a stemmed goblet or chalice. It is also the tool or “weapon” used by magickians to represent and manifest the magickal element of Water.
The shape of the upper lip, a reference to the Roman God of love; an implicit recognition of the erotic appeal of that body part.
One of the four suits of the Tarot, corresponding to the modern Hearts and to the clergy of medieval society. In the system of the Golden Dawn, corresponds to the element of Water and the first Heh of Tetragrammaton.
Also See: Coupes, Chalices
Pronounced “kur-ahn-dehr-rah,” it is Spanish for “female healer.”
Pronounced “kur-ahn-dehr-ro,” it is Spanish for “healer,” it differs from doctor or nurse (in Spanish, doctor or enfermera) in that it refers to a person who uses alternative healing methods, including herbal methods and magic for healing. In some ways, Curandero also means a “good Witch.”
A type of alchemical container. In sex magick, the vagina.
A spell or ritual—or the result of the spell or ritual—used to harm someone or punish the person. It may be transferable to the family of the cursed person. A curse may also cause a building, location, or object to bring “bad luck” to an owner or renter.
A dividing line between signs or houses in a chart.
The theory expressed by Fritz Leiber in his novel, Conjure Wife –
"The way nails sometimes insist on bending when you hammer, as if they were trying to. Or the way machinery refuses to work. Matter’s funny stuff. In large aggregates, it obeys natural law, but when you get down to the individual atom or electron, it’s largely a matter of chance or whim."
Leiber, Fritz: Conjure Wife, 1968, Award Books, Universal Publishing & Distribution Co., New York
See Cosmos Wide Web.
The "new" Astral Plane. The role of the Internet as a search engine duplicates the memory resources of the Akashic Records; the instant transfer of communications via e-mail duplicates Mental Telepathy; the Social Networking tools duplicates the Astral Body as a kind of magic mirror; the role-playing Avatar duplicates the projected Body of Light.
The expanding use of the Internet blends with similar functions on the Astral Plane to a degree that trains the user to function more directly, more consciously, in the subconscious mind and overcoming the barriers that previously existed.
In Theosophy, the concept (also known as the "Law of Cycles") that nature repeats everywhere. However, it is not an exact repetition, as with each new cycle there is a modification of the previous cycle.
Divination by means of letters and numbers painted on a turning wheel of fortune. | 1 | 3 |
Workers World Party
Chairman: Larry Holmes (First Secretary)
Headquarters: 55 W. 17 St., New York, NY 10011
Youth wing: Fight Imperialism Stand Together
Political position: Fiscal: socialist economics; Social: revolutionary socialism
Workers World Party (WWP) is a Communist-leaning socialist political party in the United States, founded in 1959 by a group led by Sam Marcy. Marcy and his followers split from the Socialist Workers Party in 1958 over a series of long-standing differences, among them Marcy's group's support for Henry A. Wallace's Progressive Party in 1948, the positive view they held of the Chinese Revolution led by Mao Zedong, and their defense of the 1956 Soviet intervention in Hungary, all of which the SWP opposed.
WWP describes itself as a party that has, since its founding, "supported the struggles of all oppressed peoples". It has recognized the right of nations to self-determination, including the nationally oppressed peoples inside the United States. It supports affirmative action as necessary in the fight for equality. As well, it opposes all forms of racism and religious bigotry. Workers World and YAWF were noted for their consistent defense of the Black Panthers and the Weather Underground along with Vietnam Veterans Against the War and the Puerto Rican Independence movement. Workers World Party was also an early advocate of gay rights, and remains active in this area.
Since then, the Workers World Party has been controversial for its support of Slobodan Milosevic, Saddam Hussein, Kim Jong-il, and the Chinese crackdown on the “counter-revolutionary rebellion” in Tiananmen Square.
The WWP has published Workers World newspaper since 1959, a weekly since 1974.
The distant origins of the WWP go back to the Global Class War Tendency, led by Sam Marcy and Vincent Copeland, within the Socialist Workers Party. This group first crystallized during the presidential election of 1948, when they urged the SWP to back Henry Wallace's Progressive Party campaign rather than field their own candidates. Throughout the 1950s the GCWT expressed positions at odds with official SWP policy: categorizing the Korean War as a class, rather than imperialist, conflict; supporting the People's Republic of China as a workers' state, if not necessarily supporting the Mao leadership; and supporting the suppression of the Hungarian Revolution by the Soviet Union in 1956.
The Global Class War Tendency left the SWP in early 1959. In the May Day issue of their new periodical, its third number, the group proclaimed, "We are THE Trotskyists. We stand 100% with all the principled positions of Leon Trotsky, the most revolutionary communist since Lenin." The sect appears to have organized officially as the Workers World Party in February 1960. At its inception the WWP was concentrated among the "working class" in Buffalo, Youngstown, Seattle, and New York. A youth organization, first known as the Anti-Fascist Youth Committee and later as Youth Against War and Fascism, was created in April 1962.
From the beginning both the WWP and the YAWF concentrated their energies on street demonstrations. Early campaigns focused on support of Patrice Lumumba, opposition to the House Un-American Activities Committee, and opposition to racial discrimination in housing. They conducted the first protest against American involvement in Vietnam on August 2, 1962. Their opposition to the war also included the tactics of "draft resistance" and "GI resistance". After organizing demonstrations at Fort Sill, Oklahoma, in support of a soldier being tried for possessing anti-war literature, they founded the American Servicemen's Union, intended to be a mass organization of American soldiers. However, the group was completely dominated by the WWP and YAWF.
During the late 1960s and 1970s the Party threw itself into protests for a number of other causes, including "defen[se] of the heroic black uprisings in Watts, Newark, Detroit, Harlem" and women's liberation. During the Attica Prison riot the rioters requested a YAWF member, Tom Soto, to present their grievances for them. The WWP was most successful in organizing demonstrations in support of desegregation "busing" in the Boston schools in 1975. Nearly 30,000 people attended the Boston March Against Racism, which they had organized. Also during the 1970s they attempted to begin work inside organized labor, but apparently were not very successful.
In 1980 the WWP began to participate in electoral politics, naming a presidential ticket as well as candidates for New York Senate, congressional and state legislature seats. In California they ran their candidate, Deirdre Griswold, in the primary for the Peace and Freedom Party nomination. She came in last with 1,232 votes out of 9,092. In 1984 the WWP supported Jesse Jackson's bid for the Democratic nomination, but when he lost in the primaries they nominated their own presidential ticket, along with a handful of congressional and legislative nominees.
Ideological background and platform
While the party originally considered itself Trotskyist, it soon ceased referring to Trotsky in its organ or carrying much, if any, Trotskyist literature. In its first decade the group leaned more toward Maoism, while still considering itself to have "the kind of political independence that enables revolutionaries to speak up if they see that the cause is being damaged by the policies of the leadership of socialist countries." They supported the People's Republic of China on the issues of the 1959 Tibetan uprising and the Sino-Indian Border War of 1962, and endorsed both the Great Leap Forward and the Great Proletarian Cultural Revolution, but criticized the Chinese characterization of the USSR as social imperialist, fearing that it would lead to Sino-American rapprochement. The party was particularly attracted to Lin Biao, praising his inclusion in the preamble to the 1969 Chinese Constitution. They felt that the disappearance of Lin and his associates marked "the end of an entire stage of the Cultural Revolution." They grew increasingly critical of Communist China after 1971, especially its closer relations with the West, and supported the "radical faction" within China that opposed this course. After the fall of the Gang of Four in 1976 they considered the Chinese leaders to be in "reaction" and to be "attacking the revolutionary domestic achievements of the Mao era". By the mid-1980s the only trace of Trotskyist ideology still espoused by the WWP was the idea of the USSR and other Communist-controlled countries as degenerated workers' states that had to be defended against imperialism even if their leaderships needed to be criticized.
Ideologically, the WWP is orthodox Marxist-Leninist. The Party's Trotskyist origins are reflected in much of Sam Marcy's early literature. However, Marcy also continued to uphold the USSR as a socialist state until the very end. When the Provisional Organizing Committee to Reconstitute a Marxist-Leninist Communist Party was formed, the WWP included a friendly headline directed to them, "Welcome, Comrades!" in Workers World newspaper. The Provisional Organizing Committee replied by telling them, "Trotskyism is Counter-Revolution and Nothing Else!". Following this, "virtually all mention of Trotsky vanished forever from its pages."
Activities and organizational structure
The WWP has organized, directed or participated in many coalition organizations for various causes, typically anti-imperialist in nature. The International Action Center (IAC), which counts many WWP members as leading activists, founded the Act Now to Stop War and End Racism (ANSWER) coalition shortly after 9/11. The WWP has run both the All People's Congress (APC) and the IAC for many years, and the memberships of the APC and the IAC in particular overlap to a large degree with WWP cadre. In 2004, a youth group close to the WWP called Fight Imperialism Stand Together (FIST) was founded.
Workers World Party has regional branches in 20 major US cities. The Party is funded by donations and contributions, while volunteers/cadres run its day-to-day operations. WWP is led by an internally elected secretariat, currently made up of six people: Deirdre Griswold, Larry Holmes, Fred Goldstein, Monica Moorehead, Sara Flounders, and Teresa Gutierrez. The WWP has participated in presidential election campaigns since the 1980 election, though its effectiveness in this area is limited, as it has not been able to get on the ballots of many states. The Party has also run some campaigns for other offices. One of the most successful was in 1990, when Susan Farquhar got on the ballot as a US Senate candidate in Michigan and received 1.3% of the vote. However, the Party's best result was in the 1992 Ohio US Senate election, when the WWP candidate received 6.7% of the vote, running against a Democrat and a Republican.
WWP and North Korea
The WWP has maintained a position of support for the government of North Korea. Through its Vietnam-era front organization, the American Servicemen's Union (ASU), the party endorsed a 1971 statement of support for that government. The statement was read on North Korea's international radio station by visiting ASU delegate Andy Stapp. In 1994, Sam Marcy sent a letter to Kim Jong Il expressing his condolences on behalf of the WWP with the passing of his father Kim Il Sung, calling him a great leader and comrade in the international communist movement. Its more recent front groups, IAC and (formerly) International ANSWER, have also demonstrated in support of North Korea.
Disagreement with other leftists
The WWP's differences with other leftists can be seen in disagreements over whether or not a particular country is socialist (e.g., Cuba, North Korea or the People's Republic of China) and over positions historically held by the Party (e.g., support for Soviet intervention in Afghanistan, Czechoslovakia and Hungary). They are also seen in disagreements over WWP calls for solidarity with governments that it sees as socialist, anti-imperialist, or facing the threat of attack by the United States. The WWP also faces opposition from ideological groups that are critical of other Marxist-Leninist and Trotskyist parties. On the political left, this criticism comes from anarchists, social democrats and the liberal left. The political right is also often opposed to any communist party or socialist organization. When the WWP was playing a role in organizing anti-war protests before the US attack on Iraq in 2003, many newspapers and TV shows attacked the WWP specifically.
In 1968 the WWP absorbed a small faction of the Spartacist League, called the Revolutionary Communist League, that had worked with it in the Coalition for an Anti-Imperialist Movement. This group left the WWP in 1971 as the New York Revolutionary Committee. The NYRC's newspaper provided rare details about the internal functioning of the group that have subsequently been used by scholars as a primary source. The NYRC later reconstituted as the Revolutionary Communist League (Internationalist).
In 2004, the WWP suffered its most serious split when a few dozen members left to form the Party for Socialism and Liberation (PSL). The ANSWER coalition aligned itself with the PSL, and the Workers World Party then founded the Troops Out Now Coalition. The split took with it many of the WWP's top leaders, including most of the Party's membership on the West Coast.
To date, neither party has officially given any reason for the split. PSL maintains a nearly identical political line.
Presidential Tickets
|Year|Presidential nominee|Vice presidential nominee|Votes|
|1980|Deirdre Griswold|Gavrielle Holmes|13,285 (0.02%)|
|1984|Larry Holmes (Gavrielle Holmes in some states)|Gloria La Riva|17,985 (0.02%)|
|1988|Larry Holmes|Gloria La Riva|7,846 (0.01%)|
|1992|Gloria La Riva|Larry Holmes|181 (0.00%)|
|1996|Monica Moorehead|Gloria La Riva|29,083 (0.03%)|
|2000|Monica Moorehead|Gloria La Riva|4,795 (0.00%)|
|2004|John Parker|Teresa Gutierrez|1,646 (0.00%), includes votes on the Liberty Union Party line in Vermont|
|2008|No candidate; endorsed Cynthia McKinney|No candidate; endorsed Rosa Clemente|n.a.|
Further reading
- Roots of the Workers World Party by Ken Lawrence, Marxmail Discussion List. January 1999. Retrieved April 12, 2005.
- Politics 1 Guide to US Political parties contains brief entry on WWP.
- "A Clarification on Sam Marcy and Henry Wallace" correspondence on the early history of the Global Class War tendency
- "'Peace Activists' with a Secret Agenda, Part Three: Stealth Trotskyism and the Mystery of the WWP" by Kevin Coogan
External links
- Workers World Party homepage
- Fight Imperialism - Stand Together, Youth group affiliated with Workers World Party
- The global class war and the destiny of American labor by Sam Marcy. New Haven, CT: Distributed by Revolutionary Communist League (Internationalist), 1979 (a foundational document of the "Global Class War tendency")
- The class character of the Hungarian uprising: proposed resolution on the class character of the Hungarian uprising: November 3, 1956 by V. Grey. New York, reissued by Workers World, 1959 (another foundational document of the "Global Class War tendency")
The History of the ROTC
The origins of military instruction in civilian colleges date back to 1819, when CPT Alden Partridge founded the
American Literary, Scientific and Military Academy at Norwich, Vermont. Today, it is Norwich University in Northfield,
VT. In 1862 the U.S. Congress recognized the need for military training at civilian educational institutions. The Morrill
Land Grant Act was enacted to fulfill this need. This Act donated lands and money to establish colleges which would
provide practical instruction in the agricultural, mechanical, and military sciences.
The United States Army Reserve Officers' Training Corps (ROTC) as we know it today dates from the National
Defense Act of 1916. World War I prevented the full development of civilian educators and military professionals
working together. At the conclusion of World War I, the program was fully implemented on college campuses. The
success of this effort was demonstrated in World War II, Korea, Vietnam and the Gulf War. College campuses provided quality officers to meet the rapidly expanding needs of mobilization. In 1964 the ROTC Vitalization Act improved the program by adding scholarships and expanding junior ROTC opportunities. The inclusion of women in the program in
1973 was another important milestone.
Today, Army ROTC opportunities are available across the country at almost three hundred host units, as well as
hundreds of partnership schools.
The Golden Eagle Battalion’s History
Mississippi Southern College Reserve Officer Training Corps (ROTC) was activated on April 3, 1950, as an Artillery
unit by an act of Congress. The first Professor of Military Science was LTC Harrison Finlayson. Under LTC Finlayson's leadership, enrollment in the program increased to 232 cadets by 1952. This was also the year the first class of
cadets was commissioned as 2nd Lieutenants. There were 30 commissionees in the class, of which four received commissions as Regular Army Officers. Also in 1952, a Military Ball was held to honor the first commissioning class.
The ball became an annual event and is still held in honor of the commissionees from each class.
Throughout the 1950s and 1960s the program thrived. Approximately 35 cadets were commissioned each year.
During those early years, the ROTC program received tremendous support from the university administration.
This was especially true while Dr. William McCain (Major General-retired) was president of Southern Miss from
1955 to 1975.
As in most of the country, ROTC at this institution suffered a drop in enrollment during the 1970s but continued to commission officers into the U.S. Army. In 1972, the Southern Miss ROTC Detachment gained approval to begin
teaching the Basic Course of Instruction at area junior colleges. In 1975, Dr. Aubrey K. Lucas began his 22-year
tenure as president of the university. Under his leadership, the university and the ROTC program continued to grow.
By 1977, with the addition of William Carey College into the Basic Program, there were extension centers and six
cross-enrolled institutions affiliated with the ROTC program.
In the early 1980s, the negative effect that the Vietnam War had on the military and ROTC programs across the
country began to abate. Under COL Tommy Palmertree in 1982-1983, enrollment increased to 2,053 from 1980's
enrollment of 734. In 1982-1983, the Southern Miss ROTC Department was the largest ROTC unit in the nation.
Throughout the 1980s, this Detachment commissioned an average of 42 lieutenants annually, with 60 being
commissioned in 1988. Of these 60, 11 were selected for commissioning in the Regular Army.
During the 1988-1989 school year, the Southern Miss ROTC detachment program was designated as a battalion
and the basic program was withdrawn from area junior colleges. This severely impacted the number of students
enrolled in the program and cut by over 50 percent the number of commissionees in 1990, 1991, and 1992. The
number of commissionees was also affected during this timeframe by Operation Desert Storm and curtailment of
the Early Commissioning Program (ECP). Under LTC David G. Senne, 1989-1994, the battalion successfully
repostured itself. The number of scholarship cadets increased with the added incentive of limited free room and
board scholarships provided by the university. The Battalion became known for commissioning Army Nurses and
was one of the top 25 Army ROTC nursing programs in the nation.
From 1993-1999, the Battalion successfully met its commissioning mission. An average of 20 2nd Lieutenants
was commissioned each year, with an active duty selection rate of 95 percent or better.
In July 1997, the first female Professor of Military Science, LTC Sheila Varnado, took command of the battalion,
and provided excellent leadership until her selection for promotion and reassignment in July of 1999.
In the fall of 1999, the battalion moved to the George Hurst building. This marked the first change of location
since the program's inception in 1950.
In 1999, LTC Kevin Dougherty took command of the Golden Eagle Battalion as the Professor of Military Science.
LTC Dougherty provided great vision and guidance until his retirement in 2005.
Upon LTC Dougherty’s retirement, LTC Chuck Mitchell took command of the Golden Eagle Battalion, bringing in
fresh ideas and new visions. LTC Mitchell retired in 2009. MAJ Joseph W. Power IV is the current Professor of
Military Science and is bringing new ideas for activities and recruiting to the Battalion.
Founded by Legislative Act on March 30, 1910, The University of Southern Mississippi was the state’s first
state-supported teacher training school. Originally known as Mississippi Normal College, the school was built on
120 acres of cutover timber land donated by Messrs. H.A. Camp, A.A. Montague and Dr. T. E. Ross, and funded
by bonds issued by the city of Hattiesburg and Forrest County in the amount of $250,000. A close relationship
between the university, city, and county is still maintained today.
The school’s stated purpose was to “qualify teachers for the public schools of Mississippi.” Mississippi Normal
College opened for classes Sept. 18, 1912, and hosted a total of 876 students during its initial session (506 in
the regular session and 370 in the summer term).
The first president, Joseph Anderson “Joe” Cook, oversaw construction of the original buildings and guided the
school during its formative years. Cook served as superintendent of the Columbus, Miss., city schools prior to being
selected as president of MNC. The school’s five original buildings were College Hall (the academic building); Forrest
County Hall (men’s and married students’ dormitory); Hattiesburg Hall (women’s dormitory); the Industrial Cottage
(training laboratory for home management); and the president’s home (now the Alumni House). Prior to 1922, the
school awarded certificates, which required at least two terms of attendance, and diplomas, which required at least
six terms. In 1922, the school was authorized to confer the baccalaureate degree, the first of which was awarded in
May 1922 to Kathryn Swetman of Biloxi.
In 1924, the school underwent the first of a series of name changes. On March 7, 1924, Mississippi Normal College
became State Teachers College. Many improvements were instituted following the name change as STC pursued accreditation by the Southern Association of Colleges and Secondary Schools (SACS). One of the improvements was construction of the Demonstration School in 1927, which served as a training ground for student teachers. Sadly, on September 28, 1928, at the behest of Gov. Theodore G. Bilbo, President Cook was summarily dismissed by the STC
Board of Trustees. The reason given was Cook’s age (65), but onlookers saw it as a political ploy because Cook had
not supported Bilbo in the recent gubernatorial election.
The Board of Trustees selected supervisor of Rural Schools Claude Bennett to succeed Joe Cook as president. Many
of the faculty and staff remained loyal to the former president and viewed Bennett with suspicion. Nevertheless, it was
during the Bennett administration that the school was approved for membership in the Southern Association of Colleges
and Secondary Schools in 1929. Moreover, enrollment continued to increase, extension courses were offered in 25 Mississippi counties, and a strong music program was set in motion. Unfortunately, Gov. Bilbo continued to meddle
in the internal affairs of State Teachers College and the other state-supported institutions of higher learning. As a
result, SACS revoked the school's accreditation in 1930.
In 1932, due to the Great Depression, the state was unable to pay faculty salaries. Fortunately, Hattiesburg banks
arranged signature loans for hard-pressed faculty members, and grocery stores extended credit to those with good
payment records. In 1932, a single board of trustees was created to oversee all of Mississippi’s institutions of higher learning. This body replaced the separate boards of trustees under which the institutions had previously operated. Uppermost on the new board’s agenda was removing political appointees of Gov. Bilbo, so, in 1933, President Bennett was removed from office.
Dr. Jennings Burton George, a Mississippi Normal College alumnus, became the school’s third president July 1, 1933,
and the first to hold a doctorate. The new chief executive inherited a huge debt, which he corrected by setting strict
financial guidelines, cutting employees’ salaries, and freezing departmental budgets. His efforts not only resulted in
a balanced budget, but each year of his administration ended with a small surplus in the treasury. On February 13,
1940, the school’s name was changed for the second time. Its new name, Mississippi Southern College, reflected
the fact that it was no longer exclusively a teachers’ college. During World War II, enrollment plummeted to around
300 as students and faculty members joined, or were drafted into, military service. Both head football coach Reed
Green and his assistant, Thad “Pie” Vann, served in the armed forces. Looking ahead to the end of the war, President George established a $35,000 trust fund to provide scholarships for returning veterans. He also proposed graduate
work in education, home economics, and music. But, in January 1945, before any of his plans were implemented, the
Board of Trustees declined to rehire Dr. George, giving no definitive reason for its action. The school is deeply
indebted to President George, for it was his sound fiscal policies and managerial genius that steered it safely through
both the Great Depression and World War II. Dr. Robert Cecil Cook became the institution’s fourth president, following
his discharge from the Army on July 6, 1945. President Cook, whose credentials as an educator were impeccable, placed academic development at the top of his agenda. During his tenure, the Graduate Studies division was created, and the Reading Clinic, the Latin American Institute, and the Speech and Hearing Clinic were established. Greek presence on
campus was increased, the band program was expanded, the “Dixie Darlings” precision dance team was formed, and enrollment soared to more than 2,000. The athletic program was strengthened, as coaches Reed Green and Pie Vann returned from military service and resumed their former positions.
Over the next two decades, the combined efforts of these two outstanding coaches brought national recognition to the Southern Miss football program. In December 1954, Cook became the first president to leave the office voluntarily when
he resigned to accept the position as vice president and general manager of the Jackson State Times, a new daily newspaper. Dr. Richard Aubrey McLemore was named acting president, effective January 1, 1955, and served in that capacity until August 17, 1955. Dr. McLemore, known to the students as “Dr. Mac,” had been a faculty member at MSC
since 1938, and had served as professor of history, head of the social studies division, and dean of the college.
The Board of Trustees selected State Archivist Dr. William David McCain as the school’s fifth president, and he assumed
the office August 18, 1955, promising to keep the campus “dusty or muddy with construction.” At least 17 new buildings
were erected during the McCain administration, including Reed Green Coliseum. Dr. McCain’s driving ambition, however,
was to achieve university status for MSC, a drive that was sponsored by the Alumni Association. To that end, he
reorganized the academic programs into colleges and schools, and on February 27, 1962, Gov. Ross Barnett signed the
bill that made Mississippi Southern College a university: The University of Southern Mississippi. The second watershed
event of the McCain administration occurred in September 1965 when, for the first time in the school’s history, African-American students were admitted. The first students were Raylawni Young Branch and Gwendolyn Elaine Armstrong. Other noteworthy events of the McCain era include formation of the Oral History Program in 1971 and establishment of the Southern Miss Gulf Park Campus in 1972. Also in 1972, the nickname of the athletic teams was changed from
“Southerners” to “Golden Eagles.” Dr. McCain retired from the presidency June 30, 1975. During his 20-year presidency, enrollment grew to 11,000. On July 1, 1975, Dr. Aubrey Keith Lucas became the sixth president of Southern Miss, having served as instructor, director of admissions, registrar, and dean of the Graduate School, in addition to holding both bachelor’s and master’s degrees from the school.
Among the accomplishments that punctuated the Lucas years were the formation of the Teaching and Learning
Resource Center; creation of the Faculty Senate; establishment of the Center for International Education; replacement
of the quarter system with the semester system; creation of the Polymer Science Institute; reorganization of the
university’s 10 schools into six colleges; formation of the Institute for Learning in Retirement; and affiliation with the
new athletic conference, Conference USA. After 21 years, Dr. Lucas stepped down from the presidency December 31,
1996, saying it was time for someone new.
Dr. Horace Weldon Fleming, Jr. assumed his duties as the university’s seventh president January 3, 1997. During his
tenure, the School of Nursing became a college, the Office of Technology Resources was created; a master’s program
in hydrographic science was added in the Department of Marine Science; a master’s program in workforce training and development was added in the School of Engineering Technology; and online classes were instituted.
In addition, a strategic plan for the future was unveiled. Designed to plot the university’s course over the next three
to five years, the plan envisions Southern Miss as “a national university for the Gulf South.” In 2001, Dr. Fleming
introduced the public phase of a $100 million comprehensive campaign. Dr. Fleming resigned the presidency in July
2001, and President Emeritus Dr. Aubrey Keith Lucas was selected to serve until the Board of Trustees of Institutions of Higher Learning hired a new president. On May 1, 2002, Dr. Shelby Freland Thames became The University of Southern Mississippi’s eighth president. Thames has an extensive history at Southern Miss, starting in 1955 when he walked onto
the campus as a student earning his bachelor’s and master’s degrees from The University of Southern Mississippi in chemistry and organic chemistry. His previous administrative positions at Southern Miss were chair of the Department of Polymer Science, dean of the College of Science and Technology, vice president for Administration and Regional
Campuses, and executive vice president. In 1970, he was the founder of the Department of Polymer Science, and, in
1973, cofounder of the Waterborne and High-Solids Coatings Symposium. He was an inductee, in 1998, to Southern
Miss’s Alumni Hall of Fame, and in that same year, the Polymer Science Research Center was named in honor of
Dr. Thames and is now known as the Shelby Freland Thames Polymer Science Research Center.
During Thames’ presidency, the state college board voted unanimously to establish a second campus for The University
of Southern Mississippi, and on August 19, 2002, Southern Miss admitted its first class of freshmen on its Gulf Park
Campus, making the university the only comprehensive university in the state with dual-campus status. Additionally, Southern Miss has multiple teaching sites that include Stennis Space Center, Jackson County, Keesler Air Force Base, the J.L. Scott Aquarium, the Gulf Coast Research Lab, and Pontlevoy, France.
The current president, Dr. Martha D. Saunders was elected as the first female president of the university in May 2007.
The earliest nickname for the university's athletic teams was Tigers, but early teams were also referred to as Normalites. Then, in 1924, our teams' name was changed to Yellow Jackets.
When the college was renamed Mississippi Southern College in 1940, a name change for the athletic teams
was fitting. In April 1940, the student body voted to name the teams Confederates. The teams were called
the Confederates during fall 1940 and spring 1941. In September 1941, Confederates was dropped, and the
teams were named Southerners.
Several years later, in 1953, General Nat (for Gen. Nathan Bedford Forrest) was approved as the Southerners' mascot.
In 1972, alumni, faculty, students, and staff were asked to submit new names for the athletic teams, and an ad hoc committee appointed by the Alumni Association voted on the submissions. Our present mascot, the Golden Eagles, was chosen as the athletic teams' name.
Golden Eagles was chosen over Raiders, War Lords, Timber Wolves and Southerners.
Originally called “Southern to the Top!” the university’s fight song was penned in 1955 by Robert Hays, assistant
director of The Pride of Mississippi Marching Band. Hays wrote the song as a closer for the first act of “Hey Daze,” a
three-act musical based upon student life at Mississippi Southern College. The song became so popular that it has
been echoed at athletic contests for more than four decades. The university’s fight song was recently renamed
“Southern Miss to the Top!” to reflect the university’s popular nickname, Southern Miss.
Southern Mississippi to the top! To the top,
So lift your voices high, show them the reason why,
That Southern spirit never will stop.
Fight! Fight! Fight!
Southern Mississippi all the way, banners high
And we will Fight! Fight! Fight! to victory,
Hear our battle cry!
In 1941, when Southern Miss was known as Mississippi Southern College, Yvonne Hamilton, ’43, and Clara Davenport,
’42, wrote the lyrics for the school’s alma mater. The conductor’s score was arranged by Mary Leila Gardner. In 1963,
the alma mater was retooled, with changes made to the first verse and Luigi Zaninelli arranging the current score. The
alma mater is as follows:
We sing to thee, our Alma Mater,
USM thy praises be.
Southern mem’ries we shall cherish
Loyalty we pledge to thee.
Spacious skies and land of sunshine,
Verdant trees and shelt’ring walls.
Now our hearts left ev-er to thee
As we praise thy hallowed halls.
Oh give us courage to go forward to our tasks,
And let us be:
Men of trust for thy name’s keeping,
USM we hallow thee.
And now we pledge thee by our honor,
Steadfast love and loyalty.
Working ever for thy glory,
USM thy glory be.
FRIDAY NIGHT AT THE FOUNTAIN...
THE SOUTHERN MISS PEP RALLY
Friday Night at the Fountain…the Southern Miss pep rally, designed with considerable student input, has rapidly evolved
into a tradition rich celebration that takes place on campus at the fountain in front of the Aubrey K. Lucas Administration Building. The event encompasses pep rally activities from past years of Southern Miss spirit and organizes them into a consistent happening at a permanent and highly visible
location the evening prior to home-game festivities. Friday Night at the Fountain was designed to bring the thrill and excitement of tailgating and game-day activities on Saturday into game week and has been enhanced by the
traditional Friday evening activities among student groups associated with The District. The festivity includes The
Pride of Mississippi Marching Band, the Dixie Darlings, the Southern Miss cheerleaders, Seymour, the Southern Misses, coaches, players, and an occasional fireworks display.
The historical district, simply known as The District, has acted as a gathering place for Southern Miss students and
alumni since the founding of the university. The area offers visitors an opportunity to take a walk in the rose garden
during the day, to see the illuminated dome at night, to enjoy the black-eyed Susans in spring, and to participate in
the Eagle Walk in fall. But The District is more than pathways and gardens. This historic part of campus is also a
tangible reminder of Southern Miss’ heritage. It is where one can go to most closely feel the spirit of the university;
it is a builder of loyalty and admiration. During the football season, The District becomes a hotbed of activity as
students, alumni, and friends of Southern Miss gather to tailgate before each home game.
THE EAGLE WALK
An unrivaled parade, a march into war, Eagle Walk is a celebration of the spirit of South Mississippi, the Gulf South
and the university. On game day at Southern Miss, a cannon is shot and a walk is made from The District to The Rock.
The Pride of Mississippi Marching Band strikes up “Southern Miss to the Top!” as thousands cheer their Golden Eagles to victory.
THE PAINTING OF THE EAGLE WALK
Before the first home football game of each year, the freshman class gathers to leave its signature on the university
by giving the Eagle Walk a fresh coat of gold paint. This time-honored tradition transforms Eagle Walk Drive into a street
of gold. The painting of the Eagle Walk is often one’s first memory, one’s first significant mark, and one’s first contribution to the university. | 1 | 5 |
Impact of Technology on Work and Jobs in the Printing Industry: Implications for Vocational Curriculum
University of Minnesota
In their text, Curriculum Development in Vocational and Technical Education, Finch and Crunkilton (1993) argue that a fundamental characteristic of the vocational and technical curriculum is that it must be "responsive to technological changes" in society (p. 69). They note that, unlike times past, work is no longer static. Thus:
The contemporary vocational curriculum must be responsive to a constantly changing world of work. New developments in various fields should be incorporated into the curriculum so that graduates can compete for jobs and, once they have jobs, achieve their greatest potential. (p. 16)
To assure the responsiveness of their curricula, vocational curriculum writers have typically resorted to strategies such as task analyses, industry advisory committees, and tracer studies. However, these strategies may no longer be sufficient in light of the pace of technological change in the workplace.
In his essay on the problems, politics, and possibilities of the vocational curriculum, Gregson (1996) encourages vocationalists to step back from the technocratic approach to curriculum development and to become more concerned with context. He suggests that the vocational curriculum ought to assume a transformative character. It should address the knotty issues of class, power, and control. To get at the root of the issues, first-hand connection with actual workplaces is required. Vocational curriculum developers must avoid prescribing from a distance. Distance tends to mask attendant contextual factors, while narrowing the focus to technical content.
Lewis and Konare (1993) conducted a study of the labor market information needs of technical and vocational college personnel in Minnesota and Wisconsin. They found that the type of information deemed to be the highest priority was that which related to the changing workplace, which occupations were becoming obsolete, how the educational requirements of jobs were changing, and how technology was affecting jobs. The authors concluded that traditional methods of gathering labor market information should be augmented by information derived through "first-hand, systematic, and continuing probing and documenting of labor market events" (p. 43). They further suggested that vocational curriculum developers could benefit from assuming an ethnographic stance, including spending time in actual workplaces, observing and gathering data. This approach would assure deeper understanding of the issues than would otherwise be possible, allowing the needs of employers and workers to be equally reflected in curricular considerations.
Consistent with Gregson's (1996) entreaty to extend beyond a restricted technical focus in vocational curriculum theory and practice, and Lewis and Konare's (1993) emphasis on direct observation, this study represents an attempt to understand technological change in an industry, through the experiences of those involved in such change. Meanings and understandings derived from such probing would provide lessons for the curriculum.
This study was designed to examine the shift in the pre-press aspect of the printing industry from manual to electronic stripping. Stripping involves color separation and image assembly on a page, with the image being some combination of graphics and text. In its traditional form, the work was a high form of craft, performed on light-tables, involving the layering of film negatives. The trade was transmitted through various combinations of vocational school preparation and on-the-job training. Today, traditional stripping is rapidly being replaced by so-called "front-end platforms" (FEPs) in the form of desktop publishing systems and color electronic pre-press systems (CEPS). In an industry study titled PRINTING 2000, it was explained that the main "drivers of technological change" are digitally structured information, FEPs, and telecommunications systems (Printing Industries of America, Undated, p. v-4). The prediction was that these new technological drivers would transform information from the traditional paper medium into electronic form, and that this information would be used in entirely new ways by the new media. With respect to the pre-press aspect of the industry, the authors predicted that:
By 1995 pre-press systems will be transmitting data (text and graphic images) and documents between hardware systems from different manufacturers. Consequently, pre-press functions and products will be linked to imagesetters and graphic artists at one end of the production process and to traditional and nontraditional printers at the other. Many of the design and creative functions that clients formerly handed over to vendors are now being pulled back by the client; in general, FEPs are enabling clients to retain control over the production of the camera copy or its electronic substitute. (Printing Industries of America, p. v-7)
Furthermore, the authors predicted that the new technologies will "dramatically affect the nature of the shop workforce and its relationship with management" (p. v-10). Because new skills will be needed in the front office and on the shop floor, the result will be "new jobs with new specifications, continued technical training, adjustments in salary ranges, and changes in work rules and relationships with management" (p. v-11). Mandel, Hauser, Carney, Gonzalez, and Bose (1993) examined the major printing technologies in the context of the global economy and set forth likely futures for the North American industry. They contend that the changes wrought by technology extend well beyond production.
The printing industry is among many industries that have been forced to deal with the impact of technological change. This is an industry that is centuries old, with a venerable craft tradition. Today many of the craft aspects are in retreat. In a study of the industry over the period from 1931-1978, Wallace and Kalleberg (1982) found a clear erosion of craft skills in the composing room. Based on this work, they predicted the total elimination of many printing jobs in the future. Subsequently, Kalleberg, Wallace, Loscocco, Leicht, and Ehm (1987) documented a deskilling trend in the composing, platemaking, and pressroom operations in the newspaper segment of the industry. They wrote that "many composing room workers are now relegated to 'paste-up' jobs" (p. 56), and that the new technologies had led to a "drastic reduction in the number of composing room workers" (p. 57).
Technological change is of critical interest to vocational educators. As indicated in PRINTING 2000, the new technologies are rapidly transforming much of pre-press work from blue collar to white collar. The most obvious change is the need for computer literacy. In the case of the Dutch printing industry, Hovels and Berg (1994) have shown that it is vital for vocational institutions to be informed about changes in the industry and to be flexible in their conception of training, the range of courses they are prepared to offer, and the structure of training. Beyond mere responsiveness to change, it is important for vocationalists to understand the interplay of skill, power, and control in the workplaces that are being transformed by technology.
The classic explanation of technological change in workplaces is embedded in labor process theory which, as articulated by Braverman (1974), contends that technology is introduced by management in its bid to separate the conception of work from its execution. With the introduction of technology, the subjectivity of craft is replaced by the predictability of the machine. Discretionary aspects of work are diminished. Workers lose shop floor control and become deskilled. As Smith (1994) notes in a retrospective on Braverman's work, deskilling was predicated on management's constant quest to learn as much as possible about how workers performed jobs, forever seeking "to appropriate their knowledge and to diminish the space in which (they) could maintain that knowledge" (p. 45). Indeed, as one looks at technological change in the printing industry, much of what used to be the craft knowledge of pre-press workers is now embedded in software.
However, just as deskilling (or downgrading) is a likely consequence of the introduction of technology, upgrading is also possible as new complex skills requiring high levels of education and training materialize (Hirschhorn, 1984; Zuboff, 1988). Thus, Spenner (1985) speaks not only of upgrading and downgrading, but also of mixed effects. Others speak of "contingent effects" and of "functional flexibility" (see Form, 1987; Gallie, 1991; McLoughlin & Clark, 1994; Milkman & Pullman, 1991). At the extreme, the introduction of technology leads to the displacement of labor (Freeman & Soete, 1994; Rifkin, 1995; Wallace, 1989). A more complete examination of these theoretical considerations can be found in Lewis (1996a).
Problem and Purpose
Technology is transforming work in the broad front of occupations for which vocational institutions prepare their clients. The problem, however, is that technological change remains a relatively unexplored area of inquiry in vocational education research and barely informs curriculum theorizing in the field. The purpose of this study was to explore the phenomenon of technological change in a particular industry, printing. It is hoped that this research will serve to illuminate the curricular problem that vocational institutions confront as they seek to respond to changes in technology, work, and jobs.
Workers and managers from six printing firms in a midwestern state were interviewed on the theme of new technology in pre-press work. These interviews focused on comparisons between contemporary electronic stripping and more traditional stripping. The companies examined in this study were selected to represent a range of transformation stages from traditional to electronic stripping. While the companies had technological change in common, they differed in a number of ways, notably size, product, age, and presence of a union. The theoretical framework of the study was informed substantially by labor process theory.
Interviews were conducted in the spring of 1995. To understand the problem from the vantage point of vocational institutions, printing programs in four such institutions were visited and observed prior to data collection. Informal conversations were held with printing instructors at all four colleges. At one of the colleges, taped interviews were conducted with two of the instructors. At a second college, the researcher attended three printing program advisory committee meetings focusing on how the curriculum should respond to technological change in pre-press.
To obtain a union perspective, the vice-president of a local graphic communications union was interviewed. Since two of the companies in the study were unionized, union insights could also be gleaned through formal and informal interviews in these settings. A total of 48 people were formally interviewed.
Each of the six companies was at a different stage of introducing technology to its pre-press work. However, all were still learning the new medium, still in the process of transformation. Case methodology (Yin, 1989) was employed to assess the impact of the introduction of technology into the company, on jobs, work, and workers. Instrumentation for the study consisted of a semi-structured interview protocol, which varied in nature according to the status of the interviewee (e.g., manager, supervisor, union steward, converted traditional stripper, or desktop publisher) (see Form, Kaufman, Parcel, & Wallace, 1988). Managers provided a corporate perspective on technological change; union people injected insight from the perspective of organized labor; and all workers contributed insight into the different ways in which the technology had affected their lives and their jobs. Converted traditional strippers (to electronics) could provide contrast between the old and the new. Traditional workers who continued to work at the light table could reflect upon how the old job had changed.
Skill was central to the inquiry and the interview protocol was accordingly focused. Consistent with recent thinking (Attewell, 1990; Darrah, 1994; Spenner, 1990), skill was viewed as a nuanced construct. On one hand, it could embody substantive complexity or, on the other hand, autonomy-control (Spenner, 1985). In keeping with a schema set forth by Carnevale, Gainer, and Meltzer (1988), skill was also viewed in terms of workplace basics, practicing teamwork, learning how to learn, and knowing the three Rs (reading, writing, and arithmetic).
The primary method of gathering data was the taped interview. This was supplemented by informal conversations and on-site observations of people at work. The researcher gained permission to schedule and conduct private interviews with an agreed-upon cross-section of workers, supervisors, and managers. Interviews lasted for approximately one hour.
The results are organized by within-company observations. They are followed by synthesized discussions of transcending issues and themes which are, in turn, played against reactions to technological change observed at the technical colleges (Miles & Huberman, 1994). As each company's experience is discussed, important stakeholders, managers, supervisors, and workers, are given voice. Specifically, the transition from traditional stripping to electronic stripping (as viewed by these stakeholders) is explored.
Company A, a subsidiary of a large printing conglomerate, was established in 1848. At the time of the study, this unionized company, which specializes in catalog printing, had a workforce of 700. The new electronic pre-press technology was being introduced in a collaborative way. The company agreed that it would transition into electronic pre-press by providing necessary retraining for its traditional pre-press workers on a phased basis. Customers had prodded the company toward electronics by dramatically increasing the incidence with which they were submitting work on diskettes. At the time of the study, retraining was underway. Workers were being moved into electronics in waves on a seniority basis. They were at various stages, from those who had completed initial training to those waiting in line.
A manager. A manager in this company explained the decision to convert existing workers (rather than hire outside workers) in moral terms. Management did not wish "to end up with 55-year-old strippers (for whom) there is no work. We think that would be a real tragedy." While some traditional workers would have difficulties, it seemed that workers would overcome these through a team approach to solving problems. He explained, "We've hired a supervisor to hold it together." The company was striving to preserve the jobs of the senior workers in the transition process to electronic pre-press.
A supervisor. The supervisor hired to establish the electronic pre-press department expressed reservations about the approach of shifting people to electronics on the basis of seniority. He pointed out that in cases where workers had 5-10 years before their retirement, they would choose to learn at their own pace. The department would grow faster if the company selected and invested in "good qualified people." He was not saying, however, that traditional pre-press workers were uniformly a "bad fit" to these new innovations. Indeed, he felt that the company needed people with "good printing backgrounds" since "we can teach computers (to somebody) easier than you could teach printing." He had been a converted traditional printer himself, and consequently worried about the high cost of converting at least some workers. The belief that printing, more than computing, was the firmer base for electronic pre-press workers was a recurring theme throughout the interviews.
A worker. Samson, a journeyman stripper, spoke of seeing "the writing on the wall" with respect to technology and becoming resigned to converting to the new electronic way. He was among those who had already been transitioned over to electronics. His reflections conveyed how wrenching an experience it had been for some workers. In his words:
I also was basically ready for a change myself, Okay? And I wanted to (try) this electronic approach. But I had a lot of apprehension about it, because it was something that I hadn't done before. Also someone at my age, I was kind of a little bit concerned about that...it was an unknown area. It was like jumping off a cliff into a fog and not knowing how far you are dropping. I almost gave it up, because the stress and anxiety of trying to learn so much information just burned my brain. I would have headaches. I couldn't sleep at night. I would wake up dreaming about the actual jobs. It was pretty much hell there for a good six months.
He continued, comparing the new electronic version of the job with the traditional processes. He felt some loss. The computer was now in control. He explained:
When you were a stripper you were in control because you had your hands on a piece of physical object, a flat, a film, a brush, a knife, you were in hands-on control. Now, we go in there and you punch in what you feel (are) the right numbers, and the computer is doing the work. And when it doesn't come out, you don't know. You're helpless.
Samson's view was that this control can never be regained by the worker. However, when some reacted to the computer with resignation and dread, others found it to be a new source of intellectual life. One such person was Pete whose view was that:
For me, I feel it's a challenge. I enjoy it. Every day that I come to work there is something new. The thing is with electronics, you have to know more than one way of achieving your end. You have to be able to, you know, problem-solve.
Company B was established in 1949. At the time of the study it had a unionized workforce of 105 employees. Like Company A, it had recently made the move to electronic pre-press. Over a two-year period it had doubled its capability to handle electronic jobs. Unlike at Company A, many traditional strippers had been fired. Company B did not have the same degree of cooperation with the union, or commitment to traditional workers that Company A had.
A manager. The production manager in this company had been brought in to spearhead the shift to electronics. His approach to personnel was a mixture of layoffs, hiring, and in-house training. He explained:
We have to bring in some of the sharper people from the outside who've been exposed to the modern electronics...who are not afraid of computers and who are very facile with them. But they don't have the depth of knowledge and tradition that we need. And then we are taking a couple of our traditional people who are really at very high wage rates and we're bringing them in to work along side these. So we are getting a mix to try to get that traditional knowledge and some of the patience and focus along with the high energy, fast-thinking of the younger worker.
The manager felt that vocational institutions had an important role to play in preparing workers for electronic pre-press environments, but pointed out that there would be a big gap between the trained entry-level worker and the necessary elements to "tread water" in the company. The company strategy here was to hire vocational graduates when they had two to three years of post-training experience. He felt that a good strategy for vocational institutions would be to partner with industry groups to help define the needs of companies. Internships in printing companies would be a worthwhile aspect of vocational training programs. The imperative for partnership between vocational institutions and firms to deal with technological change was a recurring theme.
A worker. Dave, one of the traditional strippers in this company, spoke of how the job had changed. He reminisced about an earlier time "when someone would hand me something and not have to explain it to me or belittle me in any way or make me feel like I am somehow inferior, that they would come to me and say, 'here, we trust you'." There was an editorial tone here. This worker spoke of pride in work, of wanting to put out a good quality product "because, you know, that's got my name on it."
But traditional stripping had now lost its lustre. There was diminished complexity, little room for creativity. Much of the work had become routine. He noted:
We are now doing a lot of computer repair, you know, work that comes from the computer, we have to repair it. It's cheaper to have us do it on the table than it is to run it through the computer, or they are so backed up in the output devices they would rather us fix it.
Dave continued, "I see my job disappearing because the computers will eventually take up more of the work. The customer's no longer creating the artboard, he's creating the computer file. That's why I am currently taking some classes in QuarkXpress."
One reservation he had about needing to be retrained was that he would, in effect, be restarting his career. He stated, "The only problem then is that I go down to the bottom of the heap. The training I gained over the years is only of supplemental help." Another problem was uncertainty as to whether he would be able to assimilate new training, or whether he would like working with computers. Dave felt that the computer was bringing about the gradual extinction of craft. Many aspects of craft were now being lost. He lamented that "we used to have a lot more control over how things fit together." Correcting work that comes off a computer is "not challenging" and "more of a nuisance."
A shop steward. As indicated above, Company B was a unionized company, but the union was relatively powerless with regard to the introduction of computers. A shop steward, Denzil, explained that in the past the unions could control training. Unions could run their own printing schools, utilizing their own senior members as tutors. But with the advent of desktop publishing and its varying software, the unions could no longer keep up. They were losing their grasp. They had to spend their time "trying to hold ground." These sentiments were reiterated when one spoke with the union vice-president. Unions in the industry were seen to be in survival mode.
Company C was established in 1990 and had a workforce of 80 employees. It was a "color-house," that is, its product was film, not printed products. It was non-union. A plant tour quickly revealed that the operations were almost completely converted to electronics. The sheer density of computers and computer-related equipment was impressive. This seemed to be a high-tech world, with young workers sitting in front of screens, creating images. It was a far distance from the traditional composing room. Many of the pre-press workers were college-educated, and typically had been hired after completing internships in the plant. The senior manager emphasized that workers had to be highly educated. Traditional pre-press workers had to change their ways, he opined. They had to learn the new mode.
Workers. One worker (Jake) had been brought aboard by the manager to help rationalize the stripping department. His experience had been in traditional stripping, but he had taken short seminars in electronic applications as well. As electronics expanded in the plant, he was trained on a piece of the new equipment. He explained that he had been on that piece of equipment for two years. No longer supervising people, he was still learning how to use the equipment. He considered the move to be "the best thing I have ever done." He felt valuable in the electronic shop because co-workers were always asking printing-related questions. They were drawing upon his printing knowledge while at the same time, he was still attempting to understand how to see things from the vantage point of computers.
Jake felt that his period of training on the computers, three weeks, had been too short. He and other traditional strippers had been trained and then "thrown into a production environment." This was quite unlike training in the old days. He now had to take risks in order to learn. Jake felt that the pride that was once evident under traditional conditions was now missing. He used to receive printed samples of completed jobs. Now there was disconnection between his efforts and the finished product.
Tom, a traditional stripper who had, to that point, deliberately resisted conversion to electronics, felt that the job had changed "quite a bit." He lamented, "what we used to do and what we do today (are) almost two different things." Echoing the observations of other veteran workers, he observed that the traditional job now consisted of:
easy type corrections, just easier stuff. And the way we go about doing it, we go pretty fast, and it's not the right way of doing it, but it gets the job done right away. Technically it's not the right way of doing things.
Traditional stripping had become a mere support function. As to what had been lost, Tom lamented, "I would think almost everything is lost, really." It was possible now, he mused, to take somebody off the street, put them in front of a computer and show them what to do "and they can do it but they wouldn't know why." They would not know printing. They would not understand standard printing processes such as "trapping," or when to spread and choke, he opined. Tom was ambivalent. He saw the value of the technology, but lamented the decline of craft:
I guess I'm for the technology, but it takes stuff away from the individual. The machine does it much nicer and better. I mean it's unbelievable what the stuff can do.
This worker's reaction raised the question of the human cost of technological change. When competence is yielded to the machine, is there not a psychic cost?
Company D, established in 1967, had a workforce of 125 employees. The company was halfway converted to electronics. The CEO explained that the conversion to electronics was forced, noting that "every other decision I've ever made I was able to control my own feelings and destiny."
This CEO's point should be considered as it applies to Braverman's theory regarding the introduction of technology in workplaces. It may be that some companies are no longer able to be as deliberate in the process of converting to the new technologies as they might have been at earlier points of change. They might find themselves being swept along by the irresistible force of technological determinism.
Company D afforded traditional strippers the opportunity to be retrained. According to the CEO, the conversion had thus far been semi-successful. A supervisor had been brought in to lead the company's overall conversion to electronics. She explained that her job description issued from the company was all-encompassing. She was to chart a technological course for them. The company had a pre-press department of 11 people, two of whom had been in electronic pre-press when she arrived. Two more had been converted to electronic stripping since her arrival. The company's productivity in the pre-press area had already doubled because of these changes.
Converted strippers. Conversations with two of the converted strippers revealed that they were positive about their new roles. Len had been with the company for 25 years. He indicated that the company was late in moving to electronics pre-press. When it finally decided to move, nobody knew where to start, so they hired a consultant in desktop publishing to train two veteran workers, including himself. Training occurred two days a week over a six-month period. Beyond the formal training, he indicated that he and the second worker had begun to work on actual jobs at their own initiative. They made mistakes, but they had learned. Len bought his own computer, took the manual home, and worked on it for "hours and hours and hours." There was pressure not to fail, not to cause the company to have second thoughts about hiring traditional workers.
When asked about traditional pre-press, he became nostalgic, indicating:
I still go back to the (light table) when they need it. I still have to go back to it. You feel comfortable there for one thing. You feel comfortable there but it's deep down inside. We just had this discussion this morning. It's deep down inside. If you do good work on the light table, you're going to do good work on the computer.
In his view, the new electronic procedures were very challenging, and rewarding when one did something right.
A supervisor. One supervisor expressed candid fears and apprehensions about her lack of electronic knowledge. She confided, "They (workers) can understand me, but I don't always understand them. With the computer end of it, you know, I don't always know what is possible, or what's not possible..." She felt that there was a need to go back to school.
In her view, the technology did not respect people. At every level of the workplace hierarchy, people were forced to come to terms with their inadequacies as well as the need to upgrade their competence. Indeed, whole companies had to confront a learning curve. They had to re-learn their business. Not all would survive the effort.
Company E was established in 1977, with a workforce of 238 employees. The company produces deck card packs, a specialized niche in the printing industry. This company too was in the early stages of introducing computers into its pre-press operations. The pressure toward the new technologies was not as strong in this market niche, as in others, but that could change. The company wanted to be ahead of the curve. A manager explained that they were "dragging (their) customers to the digital age."
In expanding further into electronics, this company also had decided to attempt to convert some of its traditional strippers. Ten workers were chosen for such training based on computer literacy rather than on seniority. The chosen 10 were sent to a customized training course in electronic stripping which was administered by a technical college.
One of the selected workers spoke of the pressure to learn quickly. There was competition. "The faster we learn, the better we learn, the better our chances with the company. Because we are afraid to be left behind in the dust." A critique of the customized vocational classes was that they did not offer sample jobs from actual workplaces. Another critique was that the training was too narrowly focused.
Workers. The chosen workers felt privileged. However, they were also aware of some animosity from those who had not been selected. One traditional stripper in training felt that electronics had enhanced his work. He could do things much faster. He spoke of tensions engendered by the fact that seniority was not a factor in selection for retraining.
A second worker, though a veteran stripper, had an open mind about the new realities. His view was that, while knowledge of traditional stripping helped in the electronic stripping room, "it's always nice to get a fresh open mind of somebody coming in the ranks of printing...that doesn't have the mind filled with all the old methods." Traditional knowledge was knowledge to fall back upon. He revealed, though, that learning the computer had been traumatic.
At first I got a pit in my stomach and a lump in my throat because every day coming to work I used to feel completely at ease. I knew all areas of printing. I'd done this for 20 years. It's an uncomfortable situation at first. It took a couple of months, let's say. To think here I am at 50 and I'm completely turning around. I've got to start over at scratch with ABCs that took all these years to accomplish and now I'm going right back over. After a while I've lost that lump in my throat...I'm feeling it isn't really so hard. It's something I can tackle and accomplish.
Dana, a traditional stripper, who was not among the chosen 10, expressed the view that "challenge is going to go down the hill" the more the computer is involved. Stripping used to be challenging. There was complexity. "To me it's a puzzle. That's why I think computers are taking the fun out of it." "Puzzle" was one of the metaphors that was frequently employed throughout the interviews to characterize the complexity of traditional stripping.
Company F was founded in 1907, making it one of the oldest printing establishments in the state. It had a workforce of 385 employees. Similar to some of the other companies described above, this company had also chosen to retrain selected traditional strippers. This was being done on a "phased" basis. Workers were required to wait for their turn.
Interviews with four workers revealed that getting into the electronics pre-press area was the goal of most pre-press workers, not just strippers. The company had developed a clear training plan for those selected for conversion.
Workers. Company F was in the process of converting to Macintosh computers. Much of the training had been on the job. One worker explained, "I'm running the computer and charging up to half my time for training time." He had been provided with the option of charging any or all of his time, depending on his output of film. Prior to this on-the-job training phase, the company had sent him to training courses focused on the major software applications. In his view, electronics was "much more challenging" than conventional stripping. Indeed, he found that traditional stripping had become boring. He indicated that "it was pretty much the same thing: you'd cut the masks, and you registered your film and it got to be monotonous." He also expressed the view that the electronic mode was becoming easier. Programs were becoming easier, yielding greater production efficiency. He continued that there was constant teamwork, much cooperation with co-workers, constant communication with two to three operators.
The new environment was fun. All conventional strippers, "except for some of the oldtimers who are ready to retire," wanted to be converted to electronics. Conventional stripping was in decline. He explained:
Years ago you needed strippers who had 15 to 20 years experience. Now anyone can come and lay down film and the Mac will put it out. So now the stripping experience is not really needed here.
A supervisor. Charles was the supervisor of "pre-flighting." Pre-flighting, an airline metaphor, involves deciphering the computer files submitted by customers, prior to being transformed into film. This is the first stage of the computerized process. It is a critical stage. He was taught electronic stripping by a co-worker, followed by company-sponsored courses and practice on actual jobs. The company leadership was comfortable "if it takes you eight hours to do a two-hour job when you're learning." Now he had a training role in the company, helping other operators learn electronic stripping.
Synthesis and Brief Reflection on College Responses
The companies in this study had all accepted technological change in pre-press as an inevitability and had taken steps to transform their operations. They were impelled by some combination of the deterministic push of the new technologies and the need to be competitive. Whether they were also driven, as Braverman (1974) suggests, by the desire to wrest control of the labor process from workers, by marginalizing craft, can be contested.
On the one hand, this process of transformation was disruptive. The companies had departed from their relatively safe traditions of printing to venture into the competitive new world of electronic communications. This was a volatile world of rapid depreciation of capital and rapid obsolescence of techniques. As for workers, craft expertise had come to matter little. Traditional stripping had been denuded of the artistry, discretion, and opportunities for autonomy and individuality that had once made it challenging. Because craft per se no longer mattered, a whole class of workers had found themselves stripped of the basis of their legitimacy on the shop floor. A casualty of the disruption was shop floor culture which, to a considerable degree, was based on a hierarchy of skill and experience; formerly, newcomers had been apprenticed into their roles by elders. Currently, expert strippers had once again become novices, forfeiting their shop floor status. They were grappling with the apprehension of having to learn a new job skill, particularly when the consequences of inability could be job loss. Supervisors found themselves suddenly vulnerable, not being technologically literate. Skill had now taken on new meaning on the shop floor, characterized by the rationality that digitization brings. Much of what was formerly considered to be skill now seemed to reside in computers and software.
On the other hand, the change process also seemed to bring new possibilities. The computer presented challenges, opportunities for renewal. Some converted traditional strippers seemed enthused and energized by their new roles. They liked the unpredictability of the electronic version of stripping, of not knowing what would "turn up" on customer files. They liked the opportunities for problem-solving.
Resolution of the skill controversy becomes problematic since both winners and losers exist on account of new technology. Does technological change in pre-press represent a classic case of deskilling, consistent with Braverman's (1974) theory? In terms of what traditional stripping had lost, workers could argue compellingly (and some did) that the old job had been deskilled. For those workers who still performed traditional stripping, the differences between the new job requirements and the expectations of the past were indeed stark. But for those traditional workers who had converted to electronics, whether their circumstances constituted deskilling was not clear-cut. Some workers, who enjoyed the new work, remained sentimental about the challenges of the old job. Much depended on individual perception.
As discussed in the company cases, training played an important role in the transformation to electronic stripping since the companies, in varying degrees, had opted to convert traditional workers. Much of this training was "just-in-time," conducted quickly, often on-site, and under authentic conditions (for a fuller discussion of how companies dealt with the problem of training, see Lewis, 1996b). Only one of the six companies, Company E, had reached out to the technical colleges as a source of training. This, however, must not be viewed as an indictment of the vocational curriculum. The companies needed training modes that best suited workers already on the job. Again, digitized pre-press had come suddenly. Since the technologies were new to the firms, they were new to the colleges as well. Just as companies had been compelled to transform themselves, so too were the colleges. Industry will always lead vocational institutions in technology, due in part, to the costs associated with upgrading educational facilities. However, given the frenetic pace of technological change, companies are adopting new technologies much more rapidly than technical/vocational colleges can respond. This gap must be reduced. Educational institutions must become more proactive if they wish to be relevant.
As indicated above, the initial stages of this study involved observing technical college printing programs and conversing with college personnel about technological change. A general impression formulated during this process was that the colleges were searching for ways to respond. All four programs visited offered desktop publishing and administrators were attempting to get deeper into electronic pre-press processes. These changes were necessitating the purchase of new equipment, faculty retraining, and curriculum revision. One of the perplexing questions for these institutions had to do with how much of the old curriculum, if any, to retain. Should there be complete capitulation to digitization? All four colleges remained substantially tooled for traditional pre-press, even as they were engaged in the process of curricular change.
One of the colleges, the state leader in printing, had made substantial capital investments in establishing electronic publishing and color pre-press programs. Even in this situation, the faculty remained ambivalent about the extent to which they should walk away from their traditional pre-press courses, laboratories, and equipment. They were seeking advice from the advisory committees that were serving their printing programs. A letter of invitation to advisory committee members set forth their questions bluntly: (a) What do you want us to teach? (b) How do you want us to teach it, manually or on computers? and (c) How much time should we devote to each topic?
At three advisory council meetings that the researcher attended, committee members reviewed course materials for the entire printing program and proclaimed that much of it was now obsolete and irrelevant. They made it clear that it was no longer necessary to offer most of the traditional pre-press curriculum, except for historical purposes, or where skills transcended both manual and electronic stripping. The light table was deemed to be substantially an anachronism.
One issue that preoccupied the committee members was how to balance the specific skill needs of particular firms with the general skills that a graduate would need to be marketable across firms. Among the questions discussed were: With how many kinds of graphic software should the graduate be familiar? Should he/she know only the Macintosh computer? One general principle that they were able to agree upon was that all graduates should know how to work on the dominant machine of the industry (the Macintosh) and be proficient with dominant software, such as QuarkXPress.
Implications for the Vocational Curriculum
Clearly, those vocational institutions that wish to prepare students for pre-press careers must transform their curricula to make them relevant. But along what lines should they do this? What general principles should they apply? By whose version of the new realities in the workplace should they be guided, given that workers and managers may hold quite different stakes?
A recurring argument heard among traditional workers in varying degrees was that many of the old craft principles could transfer to the new electronic medium. This argument was made in all of the companies as evidenced by their willingness to invest in the retraining of traditionals. If it is true that many traditional skills have transfer effects, then it appears reasonable to believe that old ways of teaching printing may not be entirely irrelevant. But it also suggests a need to seek out those principles and concepts that transcend printing media. There are questions here that extend beyond printing that should be subjected to additional analysis and research. Whether hands-on ways of knowing in any way enhance knowing in the realm of electronic representation is an interesting puzzle.
One difference between traditional craft and the new digitized forms is that while the former had remained relatively static, the latter is dynamic and constantly changing. The software continues to evolve, as do the machines. How then does the curriculum stay current? Is the solution merely to chase after the latest technology? Preparation for change has practical limits and challenges, such as limitation of cost. The age-old problem for vocational institutions is the inability to afford the capital equipment of industry.
One solution that could be used to address the dual problems associated with equipment cost and perpetual change is for the vocational curriculum to be extended to include internship or apprenticeship experiences in actual workplaces. To accomplish this, industries would need to become partners in the vocational enterprise in even more fundamental ways than they now do. Based on this study, one additional reason why partnerships of this type should be nurtured is that companies have shown themselves to be quite resourceful and flexible in conceiving creative ways of training their employees in the use of new technologies. It is also important to note that with industry as a partner, the problem of curricular relevance is substantially solved. Vocational curriculum developers are then able to focus on the challenge of identifying and addressing the enduring aspects of the core subject matter. The assumption here is that each field has its basic essence, that part of it that remains, while the rest of it changes (Schwab, 1962). This is the problem that confronted Hirsch (1988) as he contemplated what should be the core of cultural literacy; that is, content that everyone should know. How does one deal with transient knowledge? In an attempt to solve this problem, Gagel (1995) conceived a change model that allows for a center of "universal knowledge," with allowances for "elapsing knowledge" and "emerging knowledge." This is an interesting prospect that appears to anticipate the problem of constructing curriculum in the face of technological change.
Vocational institutions have an important role to play in preparing workers for technological change. A key assumption of this study has been that schools will be better equipped to play this role if the impact of technology upon workplaces (work and jobs) is deeply understood. The cases here have illustrated that technological change is complex, requiring a deep understanding of skill in its many meanings. Technocratic conceptions of skill and technological change are insufficient. Other critical considerations include: (a) political, relating to the power relations between management and workers; (b) sociological, relating to hierarchy and status on the shop floor; (c) psychological, in the realm of self esteem, where fear and apprehension prevail; and (d) economic, as companies take risks and workers find their jobs threatened. To the extent that these complexities are understood and considered, vocational curriculum development becomes more than a technocratic enterprise.
Lewis is Associate Professor in the Department of Vocational and Technical Education, University of Minnesota, St. Paul, Minnesota.
Attewell, P. (1990). What is skill? Work and Occupations, 17(4), 422-447.
Braverman, H. (1974). Labor and monopoly capital: The degradation of work in the twentieth century. New York: Monthly Review Press.
Carnevale, A. P., Gainer, L. J., & Meltzer, A. S. (1988). Workplace basics: The skills employers want. Washington, DC: U.S. Department of Labor and the American Society for Training and Development.
Darrah, C. (1994). Skill requirements at work. Work and Occupations, 21(1), 64-84.
Finch, C. R., & Crunkilton, J. R. (1993). Curriculum development in vocational and technical education: Planning, content and implementation. Boston, MA: Allyn and Bacon.
Form, W. (1987). On the degradation of skills. Annual Review of Sociology, 13, 29-47.
Form, W., Kaufman, R. L., Parcel, T. L., & Wallace, M. (1988). The impact of technology on work organization and work outcomes. In G. Farkas & P. England (Eds.), Industry, firms, and jobs: Sociological and economic approaches (pp. 303-328). New York: Plenum Press.
Freeman, C., & Soete, L. (1994). Work for all or mass unemployment? New York: Pinter Publishers.
Gagel, C. W. (1995). Technological literacy: A critical exposition and interpretation for the study of technology in the general curriculum. Unpublished doctoral dissertation, University of Minnesota, Twin Cities.
Gallie, D. (1991). Patterns of skill change: Upskilling, deskilling or the polarization of skills? Work, Employment & Society, 5(3), 319-351.
Gregson, J. A. (1996). Continuing the discourse: Problems, politics and possibilities of vocational curriculum. Journal of Vocational Education Research, 21(1), 35-64.
Hirsch, E. D. (1988). Cultural literacy: What every American needs to know. New York: Vintage.
Hirschhorn, L. (1984). Beyond mechanization: Work and technology in a post-industrial age. Cambridge, MA: The MIT Press.
Hovels, B., & van den Berg, S. (1994). Responsiveness of vocational training in the Dutch printing industry. In W. J. Nijhof & J. N. Streumer (Eds.), Flexibility in training and vocational education (pp. 133-149). Utrecht, The Netherlands: LEMMA.
Kalleberg, A. L., Wallace, M., Loscocco, K. A., Leicht, K. T., & Ehm, H. (1987). The eclipse of craft: The changing face of labour in the newspaper industry. In D. B. Cornfield & R. Marshall (Eds.), Workers, managers, and technological change: Emerging patterns of relations (pp. 47-71). New York: Plenum Press.
Lewis, T., & Konare, A. (1993). Labor market dispositions of technical college personnel in Minnesota and Wisconsin. Journal of Vocational Education Research, 18(3), 15-47.
Lewis, T. (1996a). Studying the impact of technology on work and jobs. Journal of Industrial Teacher Education, 33(3), 44-65.
Lewis, T. (1996b). Training and technological change: Case evidence from the printing industry. Performance Improvement Quarterly, 9(4), 38-57.
Mandel, T. F., Hauser, S. M., Carney, M. J., Gonzalez, P., & Bose, R. (1993). Bridging to a digital future (Report prepared for PRINTING 2000 Task Force). Menlo Park, CA: SRI International.
McLoughlin, I., & Clark, J. (1994). Technological change and work. Philadelphia: Open University Press.
Miles, M., & Huberman, A. M. (1994). Qualitative data analysis. Thousand Oaks, CA: SAGE Publications.
Milkman, R., & Pullman, C. (1991). Technological change in an auto assembly plant: The impact on workers' tasks and skills. Work and Occupations, 18(2), 123-147.
Printing Industries of America (Undated). PRINTING 2000. Unpublished final report. Author.
Rifkin, J. (1995). The end of work: The decline of the global labor force and the dawn of the post-market era. New York, NY: G. P. Putnam's Sons.
Schwab, J. J. (1962). The concept of the structure of a discipline. Educational Record, 43, 197-205.
Smith, V. (1994). Braverman's legacy: The labor process tradition at 20. Work and Occupations, 21(4), 403-421.
Spenner, K. I. (1985). The upgrading and downgrading of occupations: Issues, evidence, and implications for education. Review of Educational Research, 55(2), 125-154.
Spenner, K. I. (1990). Skill: Meanings, methods, and measures. Work and Occupations, 17(4), 399-421.
Wallace, M. (1989). Brave new workplace: Technology and work in the new economy. Work and Occupations, 16(4), 363-392.
Wallace, M., & Kalleberg, A. L. (1982). Industrial transformation and the decline of craft: The decomposition of skill in the printing industry, 1931-1978. American Sociological Review, 47(3), 307-324.
Yin, R. K. (1989). Case study research: Design and methods. Thousand Oaks, CA: SAGE Publications.
Zuboff, S. (1988). In the age of the smart machine. New York: Basic Books.
Reference Citation: Lewis, T. (1996). Impact of technology on work and jobs in the printing industry: Implications for vocational curriculum. Journal of Industrial Teacher Education, 34(2), 7-28.
It is also a fun experience. It’s hard to resist the desire to honk at least once as you wind your way through the 1.1-mile-long tunnel.
Historically, access in and out of what would become Zion National Park was difficult for both Anasazi and Paiute Indians as well as for 19th century pioneers and early 20th century motorists.
Ever since Nephi Johnson, a young Mormon missionary working among the Virgin River Indians discovered Zion Canyon in September 1858, many people would push for a road into Zion Canyon and on to Mt. Carmel Junction, which connects to Highway 89.
Shortly after Leo A. Snow, a deputy U.S. surveyor from St. George made a report to the Secretary of Interior of a detailed survey of Southern Utah that included Zion Canyon, President Taft signed a proclamation creating Mukuntuweap National Monument on July 31, 1909, according to local historian J.L. Crawford’s unpublished manuscript An Abbreviated History of Zion National Park.
The park’s name was changed in 1919 to Zion National Park. During the 1920s, people began lobbying state of Utah officials to build a road leading out of Zion Canyon toward Mt. Carmel Junction. In 1923, Utah Chief Engineer Howard Means and B.J. Finch, a U.S. government engineer, were sent to determine if such a road could be built, according to author Donald Garate’s book, The Zion Tunnel; From Slickrock to Switchback .
They were introduced to Springdale livestockman John Winder who showed them where a road could go up Pine Creek Canyon. After surveying the route, they determined the road was feasible, but getting Congress to fund such a project would be a challenging task.
On Sept. 27, 1927, construction by Nevada Contracting Company of Fallon, Nev., began on the building of the Zion-Mt. Carmel Highway, according to Garate. The project was divided into four sections. Section No. 1 was the 3.6 miles of switchbacks between the Virgin River and the west entrance to the tunnel. Section No. 2 was the tunnel itself, and Section No. 3 was the roadway from the east tunnel entrance to the park boundary. Section No. 4 was outside the park linking to Mt. Carmel and would be paid for by the State of Utah. Raleigh-Lang Construction Company of Springville, Utah, was granted the contract for building that section of road.
Since it was impractical to start the tunnel at either end, tunnel work began in between, according to Dr. Dena S. Markoff of the Western Heritage Conservation, Inc., Arvada, Colo., who wrote The Dudes are Always Right: The Utah Parks Company in Zion National Park 1923-1972 for the Zion Natural History Association in September 1980.
“Crews erected a scaffold and started drilling in. When they reached the point where the tunnel was to be, as determined by the survey from the footpath, they began boring in either direction from that point, on the tunnel itself,” Markoff stated.
At the time of the awarding of the contract, Dr. L. I. Hewes, deputy chief engineer of the Bureau of Public Roads, was in charge of road construction in national parks in 11 western states. He referred to the Zion-Mount Carmel Tunnel as the biggest single job the Bureau had ever undertaken, as quoted in the Aug. 31, 1927 Deseret News and the Sept. 1, 1927 Salt Lake Tribune. The quote is also cited by Markoff in her publication.
Since ventilation would be a problem in the tunnel, five windows or galleries were planned. Eventually, a sixth gallery was included in the project, Garate states.
By Feb. 9, 1928, a rough preliminary road was constructed at the west portal of the tunnel. Meanwhile, retaining walls had to be built along much of the road to secure it to the mountain side, Garate states. Construction of the switchbacks leading to the tunnel resulted in two fatalities during the three years the route was under construction. Construction workers Mac McClain and Johnny Morrison were killed in separate incidents.
On Sunday, Sept. 16, 1928, a pilot crew drilled their way through the east entrance and by Oct. 20, drilling and blasting of the tunnel was completed, a task that took 11 months and 12 days, according to Garate.
“As soon as a temporary bridge was built across Pine Creek at the east entrance, power shovels and dump trucks were moved out through the tunnel,” Garate stated. “Work was begun building the road toward the Park’s east boundary to meet the Raleigh-Lang Company whose workers were nearing the boundary from the west… A short road tunnel and four water tunnels to carry water under the road were built. Many rock culverts and retaining walls also had to be created.”
By December 1929, the road was nearly completed, with the last rock work being completed by July 10, 1930. Formal dedication of the Zion-Mt. Carmel Tunnel took place on July 4, 1930.
“On that day,” says historian J.L. Crawford in his unpublished manuscript, A History of Zion, “the Zion Lodge was decked out in bunting in observance of Independence Day, but the people who jammed the lodge to capacity had come to witness the opening of the mile-long tunnel which would open up a new section of Zion and appreciably shorten the distance to neighboring towns and parks. John Winder, of all the pioneers, who had probably done the most to ‘talk up’ the project, as well as guide the engineers over the slick rock and up the crevices to do the surveying, lived to enjoy the convenience of a new highway to his ranch.”
Utah Gov. George H. Dern dedicated the highway in front of a crowd of over 1,000 people who included National Park Director Horace M. Albright, 15 governors, top state and national highway personnel, Utah state officials, a dozen Union Pacific Railroad officials, representatives of numerous state and national newspapers, LDS President Heber J. Grant and his counselor, Anthony W. Ivins, according to Markoff.
Crawford said the completion of the Zion-Mt. Carmel Highway was of special interest to the Utah Parks Company since it shortened the distance to Bryce Canyon National Park by nearly 70 miles and to Grand Canyon by 20 miles. It also eliminated two difficult and hazardous sections of road. For six years prior to the completion of the tunnel, the Union Pacific buses had to negotiate a very steep hill and dugway out of Rockville.
Angus M. Woodbury, a Zion National Park naturalist from 1925 to 1933, in his book, A History of Southern Utah and Its National Parks described the highway route in Zion Canyon as follows: “From the canyon floor the road turns to the east up Pine Creek Canyon and spirals upward on a four-mile roadway to a tunnel paralleling the face of the vertical cliffs for 5,613 feet. Five galleries cut from the tunnel to the canyon wall offer the motorist vantage points for viewing the awe-inspiring scenery. Construction within the National Park cost $2 million; from the Park to Mt. Carmel a state and federal project, also cut in great part from solid rock cost in excess of $500,000.”
Markoff states that “without the pressure exerted by the Utah Parks Company for improved roads linking southern Utah and northern Arizona parks, the Zion-Mount Carmel Highway might have taken another generation to build. Indeed, the project might never have been accomplished without the combination of needs and influence that existed in the 1920s. Ironically,” she says, “the highway promoted by the Utah Parks Company facilitated increased automobile traffic which eventually supplanted travel by rail to the parks.”
The building of the new road and tunnel brought some side benefits to the area, according to Crawford. Besides providing needed employment, it was responsible for the commercial power line which also brought electricity to the communities along the Virgin River.
While the tunnel is basically the same as it was upon its completion, because of the softness of the sandstone which it passes through, much reinforcing has been done and concrete ribs now give added support to the tunnel’s entire length, Garate states. Since the collapse of a sandstone pillar west of Gallery No. 3 in 1958, the tunnel is now monitored electronically 24 hours a day to warn of any other potential collapses. Visitors are also no longer allowed to pull off at the galleries to view the magnificent scenery.
Also, before 1989, large vehicles, including tour buses, motor homes and trailers, were involved in numerous accidents and near misses in the tunnel due to a large increase in volume of traffic and the size of vehicles passing through the tunnel. To ensure safer passage of vehicles, the National Park Service began traffic control escorts at the tunnel.
This service, for which a $15 tunnel permit fee is charged, was provided for over 27,874 oversized vehicles in calendar year 2011, according to a National Park Service website.
Intel's i960 (or 80960) was a RISC-based microprocessor design that became quite popular during the early 1990s as an embedded microcontroller, for some time likely the best-selling CPU in that field, pushing the AMD 29000 from that spot. In spite of its success, Intel formally dropped i960 marketing in the late 1990s as a side effect of a lawsuit with DEC, in which Intel received the rights to produce the StrongARM CPU.
The i960 design was started as a response to the failure of Intel's i432 design of the early 1980s. The i432 was intended to directly support high-level languages that supported tagged, protected, garbage-collected memory -- such as Ada and Lisp -- in hardware. Because of its instruction-set complexity, its multi-chip implementation, and other design flaws, the i432 was very slow in comparison to other processors of its time.
In 1984 Intel and Siemens started a joint project, ultimately called BiiN, to create a high-end fault-tolerant object-oriented computer system programmed entirely in Ada. Many of the original i432 team members joined this project, though a new lead architect was brought in from IBM, Glenford Myers. The intended market for the BiiN systems was high-reliability computer users such as banks, industrial systems and nuclear power plants, and the protected-memory concepts from the i432 influenced the design of the BiiN system.
To avoid the performance issues that plagued the i432, the central i960 instruction-set architecture was a RISC design, and the memory subsystem was made 33 bits wide: a 32-bit word plus a "tag" bit to indicate protected memory. In many other ways the i960 followed the original Berkeley RISC design, notably in its use of register windows: an implementation-specific number of on-chip register sets that cache the per-subroutine registers, allowing for fast subroutine calls. The competing Stanford University design, commercialized as MIPS, did not use this system, relying instead on the compiler to generate optimal subroutine call and return code. Unlike the i386, but in common with most 32-bit designs, the i960 has a flat 32-bit memory space, with no memory segmentation. The i960 architecture also anticipated a superscalar implementation, with instructions being simultaneously dispatched to more than one unit within the processor.
The first 960 processors taped out in October 1985 and were sent to manufacturing that month, with the first working chips arriving in late 1985 and early 1986. The BiiN effort eventually failed, due to market forces, and the 960MC was left without a use. Myers attempted to save the design by outlining several subsets of the full capability architecture created for the BiiN system. Myers tried to convince Intel management to market the i960 (then still known as the "P7") as a general-purpose processor, both in place of the Intel 80286 and i386 (which taped out the same month as the first 960), as well as in the emerging RISC market for Unix systems, including a pitch to Steve Jobs for use in the NeXT system. Competition within and outside of Intel came not only from the i386 camp, but also from the i860 processor, yet another RISC processor design emerging within Intel at the time.
Myers was unsuccessful at convincing Intel management to support the i960 as a general-purpose or Unix processor, but the chip found a ready market in early high-performance 32-bit embedded systems. The protected-memory architecture was considered proprietary to BiiN and wasn't mentioned in the product literature, leading many to wonder why the i960MC was so large and had so many pins labeled "no connect". A version of the RISC core without memory management or an FPU became the i960KA, and the RISC core with the FPU became the i960KB. The versions were, however, all identical internally -- only the labelling was different.
The "full" 960MC was never released for the non-military market, but the i960KA became successful as a low-cost 32-bit processor for the laser-printer market, as well as for early graphics terminals and other embedded applications. Its success paid for future generations. which removed the complex memory sub-system. The first pure RISC implementation was the i960CA, which used a newly-designed superscalar RISC core and added an unusual addressable on-chip cache. The i960CA is widely considered to have been the first single-chip superscalar RISC implementation. The C-series only included one ALU, but could dispatch an arithmetic instruction, a memory reference, and a branch instruction at the same time. Later, the i960CF included a floating-point unit, but continued to omit an MMU.
Intel attempted to bolster the i960 in the I/O device controller market with the I2O standard, but this had little success and the design work was eventually ended. By the mid-1990s its price/performance ratio had fallen behind competing chips of more recent design, and Intel never produced a reduced power-consumption version that could be used in battery-powered systems.
In 1990 the i960 team was redirected to be the "second team" working in parallel on future i386 implementations -- specifically the P6 processor, which later became the Pentium Pro. The i960 project was sent to another, smaller development team, essentially ensuring its ultimate demise.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Ireland (Irish: Éire), also known as the Emerald Isle and the Island of Saints and Scholars.

Satellite photograph of Ireland. The Atlantic Ocean is to the west, the Celtic Sea is to the south and the Irish Sea is to the east.

Location: Northern Europe or Western Europe
Area: 81,638.1 km2 (31,520.65 sq mi)
Coastline: 2,797 km (1,738 mi)
Highest elevation: 1,041 m (3,415 ft)
Constituent country: Northern Ireland
Population: 6,380,661 (as of 2008)
Density: 73.4/km2 (190.1/sq mi)
Ethnic groups: Irish, Ulster Scots, Irish Travellers
Ireland (pronounced [ˈaɪrlənd]; Irish: Éire [ˈeːɾʲə]; Ulster Scots: Airlann or Airlan) is an island to the northwest of continental Europe. It is the third-largest island in Europe and the twentieth-largest island on Earth. To its east is the larger island of Great Britain, from which it is separated by the Irish Sea.
Politically, Ireland is divided between the Republic of Ireland, which covers just under five-sixths of the island, and Northern Ireland, a part of the United Kingdom, which covers the remainder and is located in the northeast of the island. The population of Ireland is approximately 6.4 million. Just under 4.6 million live in the Republic of Ireland and just under 1.8 million live in Northern Ireland.
Relatively low-lying mountains surrounding a central plain epitomise Ireland's geography with several navigable rivers extending inland. The island has lush vegetation, a product of its mild but changeable oceanic climate, which avoids extremes in temperature. Thick woodlands covered the island until the 17th century. Today, it is one of the most deforested areas in Europe. There are twenty-six extant mammal species native to Ireland.
A Norman invasion in the Middle Ages gave way to a Gaelic resurgence in the 13th century. Over sixty years of intermittent warfare in the 1500s led to English dominance after 1603. In the 1690s, a system of Protestant English rule was designed to materially disadvantage the Catholic majority and Protestant dissenters, and was extended during the 18th century. In 1801, Ireland became a part of the United Kingdom. A war of independence in the early 20th century led to the partition of the island, creating the Irish Free State, which became increasingly sovereign over the following decades. Northern Ireland remained a part of the United Kingdom and saw much civil unrest from the late 1960s until the 1990s. This subsided following a political agreement in 1998. In 1973, both parts of Ireland joined the European Economic Community.
Irish culture has had a significant influence on other cultures, particularly in the fields of literature and, to a lesser degree, science and education. A strong indigenous culture exists, as expressed for example through Gaelic games, Irish music and the Irish language, alongside mainstream Western culture, such as contemporary music and drama, and a culture shared in common with Great Britain, as expressed through sports such as soccer, rugby, horse racing, and golf, and the English language.
Most of Ireland was covered with ice until the end of the last ice age over 9,000 years ago. Sea levels were lower and Ireland, like Great Britain, was part of continental Europe. Mesolithic stone age inhabitants arrived some time after 8,000 BC and agriculture followed with the Neolithic Age around 4,500 to 4,000 BC when sheep, goats, cattle and cereals were imported from the Iberian peninsula.
At the Céide Fields, preserved beneath a blanket of peat in present-day County Mayo, is an extensive field system, arguably the oldest in the world, dating from not long after this period. Consisting of small divisions separated by dry-stone walls, the fields were farmed for several centuries between 3,500 and 3,000 BC. Wheat and barley were the principal crops.
The Bronze Age – defined by the use of metal – began around 2,500 BC, with technology changing people's everyday lives during this period through innovations such as the wheel, harnessing oxen, weaving textiles, brewing alcohol, and skillful metalworking, which produced new weapons and tools, along with fine gold decoration and jewellery, such as brooches and torcs. According to John T. Koch and others, Ireland in the Late Bronze Age was part of a maritime trading-networked culture called the Atlantic Bronze Age that also included Britain, France, Spain and Portugal where Celtic languages developed.
The Iron Age in Ireland is traditionally associated with people known as the Celts. The Celts were commonly thought to have colonised Ireland in a series of invasions between the 8th and 1st centuries BC. The Gaels, the last wave of Celts, were said to have divided the island into five or more kingdoms after conquering it. However, some academics favour a theory that emphasises the diffusion of culture from overseas as opposed to a military colonisation. Finds such as Clonycavan Man are given as evidence for this theory.
The earliest written records of Ireland come from classical Greco-Roman geographers. Ptolemy in his Almagest refers to Ireland as Mikra Brettania (Lesser Britain), in contrast to the larger island, which he called Megale Brettania (Great Britain). In his later work, Geography, Ptolemy refers to Ireland as Iwernia and to Great Britain as Albion. These "new" names were likely to have been the Celtic names for the islands at the time. The earlier names, in contrast, were likely to have been coined before direct contact with local peoples was made.
The Romans would later refer to Ireland by this name too in its Latinised form, Hibernia, or Scotia. Ptolemy records sixteen tribes inhabiting every part of Ireland in 100 AD. The relationship between the Roman Empire and the tribes of ancient Ireland is unclear. However, a number of finds of Roman coins have been found, for example at New Grange.
Ireland continued as a patchwork of rival tribes but, beginning in the 7th century AD, a concept of national kingship gradually became articulated through the concept of a High King of Ireland. Medieval Irish literature portrays an almost unbroken sequence of High Kings stretching back thousands of years but modern historians believe the scheme was constructed in the 8th century to justify the status of powerful political groupings by projecting the origins of their rule into the remote past.
The High King was said to preside over the patchwork of provincial kingdoms that together formed Ireland. Each of these kingdoms had their own kings but were at least nominally subject to the High King. The High King was drawn from the ranks of the provincial kings and ruled also the royal kingdom of Meath, with a ceremonial capital at the Hill of Tara. The concept only became a political reality in the Viking Age and even then was not a consistent one. However, Ireland did have a unifying rule of law: the early written judicial system, the Brehon Laws, administered by a professional class of jurists known as the brehons.
The Chronicle of Ireland records that in 431 AD Bishop Palladius arrived in Ireland on a mission from Pope Celestine I to minister to the Irish "already believing in Christ." The same chronicle records that Saint Patrick, Ireland's best known patron saint, arrived the following year. There is continued debate over the missions of Palladius and Patrick but the consensus is that they both took place and that the older druid tradition collapsed in the face of the new religion. Irish Christian scholars excelled in the study of Latin and Greek learning and Christian theology. In the monastic culture that followed the Christianisation of Ireland, Latin and Greek learning was preserved in Ireland during the Early Middle Ages in contrast to elsewhere in Europe, where the Dark Ages followed the decline of the Roman Empire.
The arts of manuscript illumination, metalworking and sculpture flourished and produced treasures such as the Book of Kells, ornate jewellery and the many carved stone crosses that still dot the island today. A mission founded in 563 on Iona by the Irish monk Saint Columba began a tradition of Irish missionary work that spread Christianity and learning to Scotland, England and the Frankish Empire on Continental Europe after the fall of Rome. These missions continued until the late Middle Ages, establishing monasteries and centres of learning, producing scholars such as Sedulius Scottus and Johannes Eriugena and exerting much influence in Europe.
From the 9th century, waves of Viking raiders plundered Irish monasteries and towns. These raids added to a pattern of raiding and endemic warfare that was already deep-seated in Ireland. The Vikings also were involved in establishing most of the major coastal settlements in Ireland: Dublin, Limerick, Cork, Wexford, Waterford, and also Carlingford, Strangford, Annagassan, Arklow, Youghal, Lough Foyle and Lough Ree.
On May 1, 1169, an expedition of Cambro-Norman knights with an army of about six hundred landed at Bannow Strand in present-day County Wexford. It was led by Richard de Clare, called Strongbow due to his prowess as an archer. The invasion, which coincided with a period of renewed Norman expansion, was at the invitation of Dermot Mac Murrough, the king of Leinster.
In 1166, Mac Murrough had fled to Anjou, France following a war involving Tighearnán Ua Ruairc, of Breifne, and sought the assistance of the Angevin king, Henry II, in recapturing his kingdom. In 1171, Henry arrived in Ireland in order to review the general progress of the expedition. He wanted to re-exert royal authority over the invasion which was expanding beyond his control. Henry successfully re-imposed his authority over Strongbow and the Cambro-Norman warlords and persuaded many of the Irish kings to accept him as their overlord, an arrangement confirmed in the 1175 Treaty of Windsor.
The invasion was legitimised by the provisions of the Papal Bull Laudabiliter, issued by Adrian IV in 1155. The bull encouraged Henry to take control in Ireland in order to oversee the financial and administrative reorganisation of the Irish Church and its integration into the Roman Church system. Some restructuring had already begun at the ecclesiastical level following the Synod of Kells in 1152. There has been significant controversy regarding authenticity of Laudabiliter, and there is no general agreement as to whether the bull was genuine or a forgery.
In 1172, the new pope, Alexander III, further encouraged Henry to advance the integration of the Irish Church with Rome. Henry was authorised to impose a tithe of one penny per hearth as an annual contribution. This church levy, called Peter's Pence, is still extant in Ireland as a voluntary donation. In turn, Henry accepted the title of Lord of Ireland which Henry conferred on his younger son, John Lackland, in 1185. This defined the Irish state as the Lordship of Ireland. When Henry's successor died unexpectedly in 1199, John inherited the crown of England and retained the Lordship of Ireland.
Over the century that followed, Norman feudal law gradually replaced the Gaelic Brehon Law so that by the late 13th century the Norman-Irish had established a feudal system throughout much of Ireland. Norman settlements were characterised by the establishment of baronies, manors, towns and the seeds of the modern county system. A version of the Magna Carta (the Great Charter of Ireland), substituting Dublin for London and Irish Church for Church of England, was published in 1216 and the Parliament of Ireland was founded in 1297.
However, from the mid-14th century, after the Black Death, Norman settlements in Ireland went into a period of decline. The Norman rulers and the Gaelic Irish elites intermarried and the areas under Norman rule became Gaelicised. In some parts, a hybrid Hiberno-Norman culture emerged. In response, the Irish parliament passed the Statutes of Kilkenny in 1367. These were a set of laws designed to prevent the assimilation of the Normans into Irish society by requiring English subjects in Ireland to speak English, follow English customs and abide by English law. However, by the end of the 15th century central English authority in Ireland had all but disappeared and a renewed Irish culture and language, albeit with Norman influences, was dominant again. English Crown control remained relatively unshaken in an amorphous foothold around Dublin known as The Pale and under the provisions of Poynings' Law of 1494, the Irish Parliamentary legislation was subject to the approval of the English Parliament.
The title of King of Ireland was re-created in 1542 by Henry VIII, then King of England, of the Tudor dynasty. English rule of law was reinforced and expanded in Ireland during the latter part of the 16th century, leading to the Tudor conquest of Ireland. A near complete conquest was achieved by the turn of the 17th century, following the Nine Years' War and the Flight of the Earls. This control was further consolidated during the wars and conflicts of the 17th century, which witnessed English and Scottish colonisation in the Plantations of Ireland, the Wars of the Three Kingdoms and the Williamite War. Irish losses during the Wars of the Three Kingdoms (which, in Ireland, included the Irish Confederacy and the Cromwellian conquest of Ireland) are estimated to include 20,000 battlefield casualties. A further 200,000 civilians are estimated to have died as a result of a combination of war-related famine, displacement, guerilla activity and pestilence over the duration of the war, and another 50,000 were sent to slavery in the West Indies. Some historians estimate that as much as half of the pre-war population of Ireland may have died as a result of the conflict.
The religious struggles of the 17th century left a deep sectarian division in Ireland. Religious allegiance now determined the perception in law of loyalty to the Irish King and Parliament. After the passing of the Test Act 1672, and with the victory of the forces of the dual monarchy of William and Mary over the Jacobites, Roman Catholics and nonconforming Protestant Dissenters were barred from sitting as members in the Irish Parliament. Under the emerging penal laws, Irish Roman Catholics and Dissenters were increasingly deprived of various civil rights, even the ownership of hereditary property. Additional regressive punitive legislation followed in 1703, 1709 and 1728. This completed a comprehensive systemic effort to materially disadvantage Roman Catholics and Protestant Dissenters, while enriching a new ruling class of Anglican conformists. The new Anglo-Irish ruling class became known as the Protestant Ascendancy.
An extraordinary climatic shock known as the "Great Frost" struck Ireland and the rest of Europe between December 1739 and September 1741, after a decade of relatively mild winters. The winters destroyed stored crops of potatoes and other staples and the poor summers severely damaged harvests. This resulted in the famine of 1740. An estimated 250,000 people (about one in eight of the population) died from the ensuing pestilence and disease. The Irish government halted export of corn and kept the army in quarters but did little more. Local gentry and charitable organisations provided relief but could do little to prevent the ensuing mortality.
In the aftermath of the famine, an increase in industrial production and a surge in trade brought a succession of construction booms. The population soared in the latter part of this century and the architectural legacy of Georgian Ireland was built. In 1782, Poynings' Law was repealed, giving Ireland virtual legislative independence from Great Britain for the first time since the Norman invasion. The British government, however, still retained the right to nominate the government of Ireland without the consent of the Irish parliament.
In 1798, members of the Protestant Dissenter tradition (mainly Presbyterian) made common cause with Roman Catholics in a republican rebellion inspired and led by the Society of United Irishmen, with the aim of creating an independent Ireland. Despite assistance from France the rebellion was put down by British and Irish government and yeomanry forces. In 1800, the British and Irish parliaments both passed Acts of Union that, with effect from 1 January 1801, merged the Kingdom of Ireland and the Kingdom of Great Britain to create a United Kingdom of Great Britain and Ireland.
The passage of the Act in the Irish Parliament was ultimately achieved with substantial majorities, having failed on the first attempt in 1799. According to contemporary documents and historical analysis, this was achieved through a considerable degree of bribery, with funding provided by the British Secret Service Office, and the awarding of peerages, places and honours to secure votes. Thus, Ireland became part of an extended United Kingdom, ruled directly by a united parliament at Westminster in London.
Aside from the development of the linen industry, Ireland was largely passed over by the industrial revolution, partly because it lacked coal and iron resources and partly because of the impact of the sudden union with the structurally superior economy of England, which saw Ireland as a source of agricultural produce and capital.
The Great Famine of the 1840s caused the deaths of one million Irish people and over a million more emigrated to escape it. By the end of the decade, half of all immigration to the United States was from Ireland. Mass emigration became deeply entrenched and the population continued to decline until the mid-20th century. Immediately prior to the famine the population was recorded as 8.2 million by the 1841 census. The population has never returned to this level since. The population continued to fall until 1961 and it was not until the 2006 census that the last county of Ireland (County Leitrim) to record a rise in population since 1841 did so.
The 19th and early 20th centuries saw the rise of modern Irish nationalism, primarily among the Roman Catholic population. The pre-eminent Irish political figure after the Union was Daniel O'Connell. He was elected as Member of Parliament for Ennis in a surprise result and despite being unable to take his seat as a Roman Catholic. O'Connell spearheaded a vigorous campaign that was taken up by the Prime Minister, the Irish-born soldier and statesman, the Duke of Wellington. Steering the Catholic Relief Bill through Parliament, aided by future prime minister Robert Peel, Wellington prevailed upon a reluctant George IV to sign the Bill and proclaim it into law. George's father had opposed the plan of the earlier Prime Minister, Pitt the Younger, to introduce such a bill following the Union of 1801, fearing Catholic Emancipation to be in conflict with the Act of Settlement 1701.
A subsequent campaign, led by O'Connell, for the repeal of the Act of Union failed. Later in the century, Charles Stewart Parnell and others campaigned for autonomy within the Union, or "Home Rule". Unionists, especially those located in Ulster, were strongly opposed to Home Rule, which they thought would be dominated by Catholic interests. After several attempts to pass a Home Rule bill through parliament, it looked certain that one would finally pass in 1914. To prevent this from happening, the Ulster Volunteers were formed in 1913 under the leadership of Edward Carson.
Their formation was followed in 1914 by the establishment of the Irish Volunteers, whose aim was to ensure that the Home Rule Bill was passed. The Act was passed but with the "temporary" exclusion of the six counties of Ulster that would become Northern Ireland. Before it could be implemented, however, the Act was suspended for the duration of the First World War. The Irish Volunteers split into two groups. The majority, approximately 175,000 in number, under John Redmond, took the name National Volunteers and supported Irish involvement in the war. A minority, approximately 13,000, retained the Irish Volunteers name, and opposed Ireland's involvement in the war.
The failed Easter Rising of 1916 was carried out by the latter group in alliance with a smaller socialist militia, the Irish Citizen Army. The British response, executing fifteen leaders of the Rising over a period of ten days and imprisoning or interning more than a thousand people, turned the mood of the country in favour of the rebels. The pro-independence republican party, Sinn Féin, received overwhelming endorsement in the general election of 1918, and in 1919 proclaimed an Irish Republic, setting up its own parliament (Dáil Éireann) and government. British authorities attempted to extinguish this challenge, sparking a guerilla war from 1919 to July 1921 which ended in a truce.
In 1921, the Anglo-Irish Treaty was concluded between the British Government and representatives of the First Dáil. It gave all of Ireland complete independence in its home affairs and practical independence for foreign policy. However, members of the new parliament were required to swear an oath of allegiance to the British Crown, and Northern Ireland was given an opt-out clause, which it exercised immediately, as expected. Disagreements over these provisions led to a split in the nationalist movement and a subsequent civil war between the new government of the Irish Free State and those opposed to the treaty, led by Éamon de Valera. The civil war officially ended in May 1923 when de Valera issued a cease-fire order.
During its first decade the newly formed Irish Free State was governed by the victors of the civil war. When de Valera achieved power, he took advantage of the Statute of Westminster and political circumstances to build upon inroads to greater sovereignty made by the previous government. The oath was abolished and in 1937 a new constitution was adopted. This completed a process of gradual separation from the British Empire that governments had pursued since independence. However, it was not until 1949 that the state was declared, officially, to be the Republic of Ireland.
The state was neutral during World War II, but offered clandestine assistance to the Allies, particularly in the potential defence of Northern Ireland. Despite being neutral, approximately 50,000 volunteers from independent Ireland joined the British forces during the war, four being awarded Victoria Crosses.
German intelligence was also active in Ireland, with both the Abwehr ([ˈapveːɐ̯], German for Defence; the German military intelligence service) and the SD (the Sicherheitsdienst, English: Security Service, the intelligence service of the SS) sending agents there. German intelligence operations effectively ended in September 1941, when police made arrests on the basis of surveillance carried out on the key diplomatic legations in Ireland, including that of the United States. To the authorities, counterintelligence was a fundamental line of defence. With a regular army of only slightly over seven thousand men at the start of the war, and with limited supplies of modern weapons, the state would have had great difficulty in defending itself from invasion from either side of the conflict.
Large-scale emigration marked the 1950s and 1980s, but beginning in 1987 the economy improved, and the 1990s saw the beginning of substantial economic growth. This period of growth became known as the Celtic Tiger. The Republic's real GDP grew by an average of 9.6% per annum between 1995 and 1999, in which year the Republic joined the euro. In 2000 Ireland was the sixth-richest country in the world in terms of GDP per capita. Social changes followed quickly on the heels of economic prosperity, ranging from the 'modernisation' of the annual parade in Dublin to mark the principal national holiday of Saint Patrick's Day (17 March), to the decline in authority of the Catholic Church. The financial crisis of 2008–2010 dramatically ended this period of boom. GDP fell by 3% in 2008 and by 7.1% in 2009, the worst year since records began (although earnings by foreign-owned businesses continued to grow). The state has since experienced deep recession, with unemployment, which doubled during 2009, remaining above 14% in 2012.
Northern Ireland was created as a division of the United Kingdom by the Government of Ireland Act 1920 and until 1972 it was a self-governing jurisdiction within the United Kingdom with its own parliament and prime minister. Northern Ireland, as part of the United Kingdom, was not neutral during the Second World War and Belfast suffered four bombing raids in 1941. Conscription was not extended to Northern Ireland and roughly an equal number volunteered from Northern Ireland as volunteered from the south. One, James Joseph Magennis, received the Victoria Cross for valour.
Although Northern Ireland was largely spared the strife of the civil war, in decades that followed partition there were sporadic episodes of inter-communal violence. Nationalists, mainly Roman Catholic, wanted to unite Ireland as an independent republic, whereas unionists, mainly Protestant, wanted Northern Ireland to remain in the United Kingdom. The Protestant and Catholic communities in Northern Ireland voted largely along sectarian lines, meaning that the Government of Northern Ireland (elected by "first-past-the-post" from 1929) was controlled by the Ulster Unionist Party. Over time, the minority Catholic community felt increasingly alienated with further disaffection fueled by practices such as gerrymandering and discrimination in housing and employment.
In the late 1960s, nationalist grievances were aired publicly in mass civil rights protests, which were often confronted by loyalist counter-protests. The government's reaction to confrontations was seen to be one-sided and heavy-handed in favour of unionists. Law and order broke down as unrest and inter-communal violence increased. The Northern Ireland government requested the British Army to aid the police, who were exhausted after several nights of serious rioting. In 1969, the paramilitary Provisional IRA, which favoured the creation of a united Ireland, emerged from a split in the Irish Republican Army and began a campaign against what it called the "British occupation of the six counties".
Other groups, on both the unionist side and the nationalist side, participated in violence and a period known as the Troubles began. Over 3,600 deaths resulted over the subsequent three decades of conflict. Owing to the civil unrest during the Troubles, the British government suspended home rule in 1972 and imposed direct rule. There were several unsuccessful attempts to end the Troubles politically, such as the Sunningdale Agreement of 1973.

In 1998, following a ceasefire by the Provisional IRA and multi-party talks, the Good Friday Agreement was concluded as a treaty between the United Kingdom and the Republic of Ireland, annexing the text agreed in the multi-party talks. The substance of the Agreement (formally referred to as the Belfast Agreement) was later endorsed by referendums in both parts of Ireland. The Agreement restored self-government to Northern Ireland on the basis of power-sharing in a regional Executive drawn from the major parties in a new Northern Ireland Assembly, with entrenched protections for the two main communities. The Executive is jointly headed by a First Minister and deputy First Minister drawn from the unionist and nationalist parties.

Violence had decreased greatly after the Provisional IRA and loyalist ceasefires in 1994 and in 2005 the Provisional IRA announced the end of its armed campaign and an independent commission supervised its disarmament and that of other nationalist and unionist paramilitary organisations. The Assembly and power-sharing Executive were suspended several times but were restored again in 2007. In that year the British government officially ended its military support of the police in Northern Ireland (Operation Banner) and began withdrawing troops.
Since 1922, Ireland has been partitioned between two political entities: the Republic of Ireland, which covers just under five-sixths of the island, and Northern Ireland, which remains part of the United Kingdom.
The 1998 Belfast Agreement provides for political co-operation between the two jurisdictions through a number of institutions and bodies. The North/South Ministerial Council, established under the agreement, is an institution through which ministers from the Government of Ireland and the Northern Ireland Executive can formulate all-island policies in twelve "areas of co-operation" such as agriculture, the environment and transport. Six of these policy areas have associated all-island "implementation bodies." For example, food safety is managed by the Food Safety Promotion Board and Tourism Ireland markets the island as a whole.
Three major political parties, Sinn Féin, the Irish Green Party and, most recently, Fianna Fáil, are organised on an all-island basis. However, only the first two of these have contested elections and held legislative seats in both jurisdictions. The two jurisdictions share transport, telecommunications, energy and water systems. With a few notable exceptions, the island is the main organisational unit for major religious, cultural and sporting organisations.
Despite the two jurisdictions using two distinct currencies (the Euro and Pound Sterling), a growing amount of commercial activity is carried out on an all-island basis. This has been facilitated by the two jurisdictions' shared membership of the European Union, and there have been calls from members of the business community and policymakers for the creation of an "all-island economy" to take advantage of economies of scale and boost competitiveness. One area in which the island already operates as a single market is electricity and there are plans for the creation of an all-island gas market.
For much of their existence electricity networks in the Republic of Ireland and Northern Ireland were entirely separate. Both networks were designed and constructed independently post partition. However, as a result of changes over recent years they are now connected with three interlinks and also connected through Great Britain to mainland Europe. The situation in Northern Ireland is complicated by the issue of private companies not supplying Northern Ireland Electricity (NIE) with enough power. In the Republic of Ireland, the ESB has failed to modernise its power stations and the availability of power plants has recently averaged only 66%, one of the worst such rates in Western Europe. EirGrid is building a HVDC transmission line between Ireland and Great Britain with a capacity of 500 MW, about 10% of Ireland's peak demand.
As with electricity, the natural gas distribution network is also now all-island, with a pipeline linking Gormanston, County Meath, and Ballyclare, County Antrim. Most of Ireland's gas comes through interconnectors between Twynholm in Scotland and Ballylumford, County Antrim and Loughshinny, County Dublin. A decreasing supply is coming from the Kinsale gas field off the County Cork coast and the Corrib Gas Field off the coast of County Mayo has yet to come on-line. The County Mayo field is facing some localised opposition over a controversial decision to refine the gas onshore.
There have been recent efforts in Ireland to use renewable energy such as wind power. Large wind farms are being constructed in coastal counties such as Cork, Donegal, Mayo and Antrim. The construction of wind farms has in some cases been delayed by opposition from local communities, some of whom consider the wind turbines to be unsightly. The Republic of Ireland is also hindered by an ageing network that was not designed to handle the varying availability of power that comes from wind farms. The ESB's Turlough Hill facility is the only power-storage facility in the state.
The island of Ireland is located in the north-west of Europe, between latitudes 51° and 56° N, and longitudes 11° and 5° W. It is separated from the neighbouring island of Great Britain by the Irish Sea and the North Channel, which has a width of 23 kilometres (14 mi) at its narrowest point. To the west is the northern Atlantic Ocean and to the south is the Celtic Sea, which lies between Ireland and Brittany, in France. Ireland and Great Britain, together with nearby islands, are known collectively as the British Isles. As the term British Isles is controversial in relation to Ireland, the alternate term "Ireland and Britain" (or "Britain and Ireland") is often used as a neutral term for the islands.
A ring of coastal mountains surround low plains at the centre of the island. The highest of these is Carrauntoohil (Irish: Corrán Tuathail) in County Kerry, which rises to 1,038 m (3,406 ft) above sea level. The most arable land lies in the province of Leinster. Western areas can be mountainous and rocky with green panoramic vistas. The River Shannon, the island's longest river at 386 km (240 mi) long, rises in County Cavan in the north west and flows 113 kilometres (70 mi) to Limerick city in the mid west.
The island's lush vegetation, a product of its mild climate and frequent rainfall, earns it the sobriquet the Emerald Isle. Overall, Ireland has a mild but changeable oceanic climate with few extremes. The climate is typically insular and temperate, avoiding the extremes in temperature of many other areas of the world at similar latitudes. This is a result of the moderating moist winds which ordinarily prevail from the south-western Atlantic.
Precipitation falls throughout the year but is light overall, particularly in the east. The west tends to be wetter on average and prone to Atlantic storms, especially in the late autumn and winter months. These occasionally bring destructive winds and higher total rainfall to these areas, as well as sometimes snow and hail. The regions of north County Galway and east County Mayo have the highest annual incidence of recorded lightning on the island, with lightning occurring on approximately five to ten days per year in these areas. Munster, in the south, records the least snow whereas Ulster, in the north, records the most.
Inland areas are warmer in summer and colder in winter. Usually around 40 days of the year are below freezing 0 °C (32 °F) at inland weather stations, compared to 10 days at coastal stations. Ireland is sometimes affected by heat waves, most recently in 1995, 2003 and 2006. In common with the rest of Europe, Ireland experienced unusually cold weather during the winter of 2009/10. Temperatures fell as low as −17.2 °C (1 °F) in County Mayo on December 20, and up to a metre (3 ft) of snow fell in mountainous areas.
The island consists of varied geological provinces. In the far west, around County Galway and County Donegal, is a medium to high grade metamorphic and igneous complex of Caledonide affinity, similar to the Scottish Highlands. Across southeast Ulster and extending southwest to Longford and south to Navan is a province of Ordovician and Silurian rocks, with similarities to the Southern Uplands province of Scotland. Further south, along the County Wexford coastline, is an area of granite intrusives into more Ordovician and Silurian rocks, like that found in Wales. In the southwest, around Bantry Bay and the mountains of Macgillicuddy's Reeks, is an area of substantially deformed, but only lightly metamorphosed, Devonian-aged rocks. This partial ring of "hard rock" geology is covered by a blanket of Carboniferous limestone over the centre of the country, giving rise to a comparatively fertile and lush landscape. The west-coast district of the Burren around Lisdoonvarna has well-developed karst features. Significant stratiform lead-zinc mineralisation is found in the limestones around Silvermines and Tynagh.
Hydrocarbon exploration is ongoing following the first major find at the Kinsale Head gas field off Cork in the mid-1970s. More recently, in 1999, economically significant finds of natural gas were made in the Corrib Gas Field off the County Mayo coast. This has increased activity off the west coast in parallel with the "West of Shetland" step-out development from the North Sea hydrocarbon province. The Helvick oil field, estimated to contain over 28 million barrels (4,500,000 m3) of oil, is another recent discovery.
There are three World Heritage Sites on the island: the Brú na Boinne, Skellig Michael and the Giant's Causeway. A number of other places are on the tentative list, for example the Burren and Mount Stewart.
Some of the most visited sites in Ireland include Bunratty Castle, the Rock of Cashel, the Cliffs of Moher, Holy Cross Abbey and Blarney Castle. Historically important monastic sites include Glendalough and Clonmacnoise, which are maintained as national monuments in the Republic of Ireland.
Dublin is the most heavily touristed region and home to several of the most popular attractions, such as the Guinness Storehouse and the Book of Kells. The west and south west, which includes the Lakes of Killarney and the Dingle peninsula in County Kerry, and Connemara and the Aran Islands in County Galway, are also popular tourist destinations. Achill Island lies off the coast of County Mayo and is Ireland's largest island. It is a popular destination for surfing and contains five Blue Flag beaches and Croaghaun, one of the world's highest sea cliffs. Stately homes, built during the 17th, 18th and 19th centuries in Palladian, Neoclassical and neo-Gothic styles, such as Castle Ward, Castletown House and Bantry House, are also of interest to tourists. Some have been converted into hotels, such as Ashford Castle, Castle Leslie and Dromoland Castle.
As Ireland was isolated from mainland Europe by rising sea levels after the ice age, it has less diverse animal and plant species than either Great Britain or mainland Europe. There are 55 mammal species in Ireland, of which only 26 land mammal species are considered native to Ireland. Some species, such as the red fox, hedgehog and badger, are very common, whereas others, like the Irish hare, red deer and pine marten, are less so. Aquatic wildlife, such as species of sea turtle, shark, seal, whale and dolphin, are common off the coast. About 400 species of birds have been recorded in Ireland. Many of these are migratory, including the Barn Swallow. Most of Ireland's bird species come from Iceland, Greenland and Africa.
Several different habitat types are found in Ireland, including farmland, open woodland, temperate broadleaf and mixed forests, conifer plantations, peat bogs and a variety of coastal habitats. However, agriculture drives current land use patterns in Ireland, limiting natural habitat preserves, particularly for larger wild mammals with greater territorial needs. With no top predator in Ireland, populations of animals, such as semi-wild deer, that cannot be controlled by smaller predators, such as the fox, are controlled by annual culling.
There are no snakes in Ireland and only one reptile (the common lizard) is native to the island. Extinct species include the Irish elk, the great auk and the wolf. Some previously extinct birds, such as the Golden Eagle, have recently been reintroduced after decades of extirpation.
Until medieval times Ireland was heavily forested with oak, pine and birch. Forests today cover about 12.6% of Ireland, of which 4,450 km² or one million acres is owned by Coillte, the Republic's forestry service. The Republic lies in 42nd place (out of 55) in a list of the most forested countries in Europe. Much of the land is now covered with pasture and there are many species of wild-flower. Gorse (Ulex europaeus), a wild furze, is commonly found growing in the uplands and ferns are plentiful in the more moist regions, especially in the western parts. It is home to hundreds of plant species, some of them unique to the island, and has been "invaded" by some grasses, such as Spartina anglica.
Rarer species include:
The island has been invaded by some algae, some of which are now well established. For example:
Codium fragile ssp. atlanticum has recently been established to be native, although for many years it was regarded as an alien species.
Because of its mild climate, many species, including sub-tropical species such as palm trees, are grown in Ireland. Phytogeographically, Ireland belongs to the Atlantic European province of the Circumboreal Region within the Boreal Kingdom. The island itself can be subdivided into two ecoregions: the Celtic broadleaf forests and North Atlantic moist mixed forests.
The long history of agricultural production, coupled with modern intensive agricultural methods such as pesticide and fertiliser use and runoff from contaminants into streams, rivers and lakes, impact the natural fresh-water ecosystems and have placed pressure on biodiversity in Ireland.
A land of green fields for crop cultivation and cattle rearing limits the space available for the establishment of native wild species. Hedgerows, however, traditionally used for maintaining and demarcating land boundaries, act as a refuge for native wild flora. This ecosystem stretches across the countryside and acts as a network of connections to preserve remnants of the ecosystem that once covered the island. Subsidies under the Common Agricultural Policy, which supported agricultural practices that preserved hedgerow environments, are undergoing reforms. The Common Agricultural Policy had in the past subsidised potentially destructive agricultural practices, for example by emphasising production without placing limits on indiscriminate use of fertilisers and pesticides; but recent reforms have gradually decoupled subsidies from production levels and introduced environmental and other requirements.
Forest covers about 12.6% of the country, most of it designated for commercial production. Forested areas typically consist of monoculture plantations of non-native species, which may result in habitats that are not suitable for supporting native species of invertebrates. Remnants of native forest can be found scattered around the island, in particular in the Killarney National Park. Natural areas require fencing to prevent over-grazing by deer and sheep that roam over uncultivated areas. Grazing in this manner is one of the main factors preventing the natural regeneration of forests across many regions of the country.
People have lived in Ireland for over 9,000 years, although only a limited amount is known about the Palaeolithic, Neolithic and Bronze Age inhabitants of the island. Early historical and genealogical records note the existence of dozens of different peoples that may or may not be mythological, for example the Cruithne, Attacotti, Conmaicne, Eóganachta, Érainn and Soghain, to name but a few. Over the past 1,000 years or so, Vikings, Normans, Scots and English have all added to the Gaelic population and have had significant influences on Irish culture.
Ireland's largest religious group is Christianity. The largest denomination is Roman Catholicism, representing over 73% of the island's population (and about 87% of the Republic of Ireland's). Most of the rest of the population adhere to one of the various Protestant denominations (about 53% of Northern Ireland's population). The largest is the Anglican Church of Ireland. The Muslim community in Ireland is growing, mostly through increased immigration. The island has a small Jewish community. About 4% of the Republic's population and about 14% of the Northern Ireland population describe themselves as of no religion. In a 2010 survey conducted on behalf of the Irish Times, 32% of respondents said they went to a religious service more than once a week.
The population of Ireland rose rapidly from the 16th century until the mid-19th century, but a devastating famine in the 1840s caused one million deaths and forced over one million more to emigrate in its immediate wake. Over the following century the population was reduced by over half, at a time when the general trend in European countries was for populations to rise on average threefold.
Traditionally, Ireland is subdivided into four provinces: Connacht (west), Leinster (east), Munster (south), and Ulster (north). In a system that developed between the 13th and 17th centuries, Ireland has 32 traditional counties. Twenty-six of these counties are in the Republic of Ireland and six are in Northern Ireland. The six counties that constitute Northern Ireland are all in the province of Ulster (which has nine counties in total). As such, Ulster is often used as a synonym for Northern Ireland, although the two are not coterminous.
In the Republic of Ireland, counties form the basis of the system of local government. Counties Dublin, Cork, Limerick, Galway, Waterford and Tipperary have been broken up into smaller administrative areas. However, they are still treated as counties for cultural and some official purposes, for example postal addresses and by the Ordnance Survey Ireland. Counties in Northern Ireland are no longer used for local governmental purposes, but, as in the Republic, their traditional boundaries are still used for informal purposes such as sports leagues and in cultural or tourism contexts.
City status in Ireland is decided by legislative or royal charter. Dublin, with over 1 million residents in the Greater Dublin Area, is the largest city on the island. Belfast, with 276,459 residents, is the largest city in Northern Ireland. City status does not directly equate with population size. For example, Armagh, with 14,590 residents, is the seat of the Church of Ireland and the Roman Catholic Primate of All Ireland and was re-granted city status by Queen Elizabeth II in 1994 (having lost that status in local government reforms of 1840). In the Republic of Ireland, Kilkenny, seat of the Butler dynasty, while no longer a city for administrative purposes (since the 2001 Local Government Act), is entitled by law to continue to use the description.
The population of Ireland collapsed dramatically during the second half of the 19th century. A population of over 8 million in 1841 was reduced to slightly more than 4 million by 1921. In part, the fall in population was due to death from the Great Famine of 1845 to 1852, which took about 1 million lives. However, by far the greater cause of population decline was the dire economic state of the country, which led to an entrenched culture of emigration lasting until the 21st century.
Emigration from Ireland in the 19th century contributed to the populations of England, the United States, Canada and Australia, where today a large Irish diaspora lives. Today 4.3 million Canadians, or 14% of the population, are of Irish descent. A total of 36 million Americans claim Irish ancestry – more than 12% of the total population and 20% of the white population. Massachusetts is the most Irish of US states, with 23.8% of the population claiming Irish ancestry. The pattern of emigration over this period particularly devastated the western and southern seaboards. Prior to the Great Famine, the provinces of Connacht, Munster and Leinster were more or less evenly populated, whereas Ulster was far less densely populated than the other three. Today, Ulster and Leinster, and in particular Dublin, have a far greater population density than Munster and Connacht.
With growing prosperity since the last decade of the 20th century, Ireland became a place of immigration. Since the European Union expanded to include Poland in 2004, Polish people have made up the largest number of immigrants (over 150,000) from Central Europe. There has also been significant immigration from Lithuania, the Czech Republic and Latvia.
The Republic of Ireland in particular has seen large-scale immigration. The 2006 census recorded that 420,000 foreign nationals, or about 10% of the population, lived in the Republic of Ireland. Chinese and Nigerians, along with people from other African countries, have accounted for a large proportion of the non–European Union migrants to Ireland. Up to 50,000 eastern European migrant workers may have left Ireland since the end of 2008.
Two main languages are spoken in Ireland: Irish and English. Both languages have widely contributed to literature. Irish, now a minority but official language of the Republic of Ireland, was the vernacular of the Irish people for over two thousand years and was probably introduced by some form of proto-Gaelic migration during the Iron Age, possibly earlier. It began to be written down after Christianisation in the 5th century and spread to Scotland and the Isle of Man, where it evolved into the Scottish Gaelic and Manx languages respectively. It has a vast treasure of written texts from many centuries and is divided by linguists into Old Irish (6th to 10th century), Middle Irish (10th to 13th century) and Early Modern Irish (until the 17th century), which evolved into the Modern Irish spoken today. It remained the dominant language of Ireland for most of those periods, absorbing influences from Latin, Old Norse, French and English. It declined under British rule but remained the majority tongue until the early 19th century; since then it has been a minority language, although revival efforts continue in both the Republic of Ireland and Northern Ireland. The Gaeltacht, or Irish-speaking areas, are nevertheless still seeing a decline in the language. It is a compulsory subject in the state education system in the Republic, and the Gaelscoil movement has seen many Irish-medium schools established in both jurisdictions.
English was first introduced to Ireland during the Norman invasion. It was spoken by a few peasants and merchants brought over from England and was largely replaced by Irish before the Tudor conquest of Ireland. It was introduced as the official language with the Tudor and Cromwellian conquests. The Ulster plantations gave it a permanent foothold in Ulster, and it remained the official and upper-class language elsewhere, the Irish-speaking chieftains and nobility having been deposed. Language shift during the 19th century replaced Irish with English as the first language for the vast majority of the population. Less than 10% of the population of the Republic of Ireland today speak Irish regularly outside of the education system, and 38% of those over 15 years are classified as "Irish speakers". In Northern Ireland, English is the de facto official language, but official recognition is afforded to Irish, including specific protective measures under Part III of the European Charter for Regional or Minority Languages. A lesser status (including recognition under Part II of the Charter) is given to Ulster Scots dialects, which are also spoken by some in the Republic of Ireland. In recent decades, with the increase in immigration, many more languages have been introduced, particularly deriving from Asia and Eastern Europe.
Ireland's culture comprises elements of the culture of ancient immigration and influences (such as Gaelic culture) and more recent Anglicisation and Americanisation, as well as participation in a broader European culture. In broad terms, Ireland is regarded as one of the Celtic nations of Europe, which also include Scotland, Wales, Cornwall, the Isle of Man and Brittany. This combination of cultural influences is visible in the intricate designs termed Irish interlace or Celtic knotwork. These can be seen in the ornamentation of medieval religious and secular works. The style is still popular today in jewellery and graphic art, as is the distinctive style of traditional Irish music and dance, and has become indicative of modern "Celtic" culture in general.
Religion has played a significant role in the cultural life of the island since ancient times (and since the 17th century plantations, has been the focus of political identity and divisions on the island). Ireland's pre-Christian heritage fused with the Celtic Church following the missions of Saint Patrick in the 5th century. The Hiberno-Scottish missions, begun by the Irish monk Saint Columba, spread the Irish vision of Christianity to pagan England and the Frankish Empire. These missions brought written language to an illiterate population of Europe during the Dark Ages that followed the fall of Rome, earning Ireland the sobriquet, "the island of saints and scholars". In more recent years, the Irish pubs have become outposts of Irish culture worldwide.
The Republic of Ireland's national theatre is the Abbey Theatre founded in 1904 and the national Irish-language theatre is An Taibhdhearc, established in 1928 in Galway. Playwrights such as Seán O'Casey, Brian Friel, Sebastian Barry, Conor McPherson and Billy Roche are internationally renowned.
There are a number of languages used in Ireland. Irish is the only language to have originated from within the island. Since the late 19th century, English has become the predominant first language, having been a spoken language in Ireland since the Middle Ages. A large minority claim some ability to speak Irish today, although it is the first language only of a small percentage of the population. Under the Constitution of Ireland, both languages have official status, with Irish being the national and first official language. In Northern Ireland English is the dominant state language, whilst Irish and Ulster Scots are recognised minority languages.
Ireland has made a large contribution to world literature in all its branches, particularly in the English language. Poetry in Irish is the oldest vernacular poetry in Europe, with the earliest examples dating from the 6th century. In English, Jonathan Swift, still often called the foremost satirist in the English language, was wildly popular in his day for works such as Gulliver's Travels and A Modest Proposal and Oscar Wilde is known most for his often quoted witticisms.
In the 20th century, Ireland produced four winners of the Nobel Prize for Literature: George Bernard Shaw, William Butler Yeats, Samuel Beckett and Seamus Heaney. Although not a Nobel Prize winner, James Joyce is widely considered to be one of the most significant writers of the 20th century. Joyce's 1922 novel Ulysses is considered one of the most important works of Modernist literature and his life is celebrated annually on 16 June in Dublin as "Bloomsday". Modern Irish literature is often connected with its rural heritage through writers such as John McGahern and poets such as Seamus Heaney.
Irish traditional music and dance have seen a recent surge in popularity, not least through the phenomenon of Riverdance, a theatrical performance of Irish traditional dancing. In the middle years of the 20th century, as Irish society was modernising, traditional music fell out of favour, especially in urban areas. During the 1960s, inspired by the American folk music movement, there was a revival of interest in Irish traditional music led by groups such as The Dubliners, The Chieftains, Emmet Spiceland, The Wolfe Tones, the Clancy Brothers, Sweeney's Men and individuals like Seán Ó Riada and Christy Moore.
Groups and musicians including Horslips, Van Morrison and Thin Lizzy incorporated elements of Irish traditional music into contemporary rock music and, during the 1970s and 1980s, the distinction between traditional and rock musicians became blurred, with many individuals regularly crossing over between these styles of playing. This trend can be seen more recently in the work of artists like Enya, The Saw Doctors, The Corrs, Sinéad O'Connor, Clannad, The Cranberries, Black 47 and The Pogues among others.
During the 1990s a sub-genre of folk metal emerged in Ireland that fused heavy metal music with Irish and Celtic music. The pioneers of this sub-genre were Cruachan, Primordial, and Waylander. Some contemporary music groups stick closer to a "traditional" sound, including Altan, Téada, Danú, Dervish, Lúnasa, and Solas. Others incorporate multiple cultures in a fusion of styles, such as Afro Celt Sound System and Kíla.
The earliest known Irish graphic art and sculpture are Neolithic carvings found at sites such as Newgrange; the tradition is traced through Bronze Age artefacts and the religious carvings and illuminated manuscripts of the medieval period. During the course of the 19th and 20th centuries, a strong tradition of painting emerged, including such figures as John Butler Yeats, William Orpen, Jack Yeats and Louis le Brocquy.
The Irish philosopher and theologian Johannes Scotus Eriugena was considered one of the leading intellectuals of the early Middle Ages. Sir Ernest Henry Shackleton, an Anglo-Irish explorer, was one of the principal figures of Antarctic exploration. He, along with his expedition, made the first ascent of Mount Erebus and discovered the approximate location of the South Magnetic Pole. Robert Boyle was a 17th-century natural philosopher, chemist, physicist, inventor and early gentleman scientist. He is widely regarded as one of the founders of modern chemistry and is best known for the formulation of Boyle's law. The 19th-century physicist John Tyndall discovered the Tyndall effect, which explains why the sky is blue. Father Nicholas Joseph Callan, Professor of Natural Philosophy in Maynooth College, is best known for his invention of the induction coil and the transformer, and he discovered an early method of galvanisation in the 19th century.
Other notable Irish physicists include Ernest Walton, winner of the 1951 Nobel Prize in Physics. With Sir John Douglas Cockcroft, he was the first to split the nucleus of the atom by artificial means, and he made contributions to the development of a new theory of the wave equation. William Thomson, or Lord Kelvin, is the person after whom the absolute temperature unit, the kelvin, is named. Sir Joseph Larmor, a physicist and mathematician, made innovations in the understanding of electricity, dynamics, thermodynamics and the electron theory of matter. His most influential work was Aether and Matter, a book on theoretical physics published in 1900.
George Johnstone Stoney introduced the term electron in 1891. John Stewart Bell was the originator of Bell's theorem and of a paper concerning the discovery of the Bell–Jackiw–Adler anomaly, and was nominated for a Nobel Prize. Notable mathematicians include Sir William Rowan Hamilton, famous for work in classical mechanics and the invention of quaternions. Francis Ysidro Edgeworth's contribution of the Edgeworth Box remains influential in neo-classical microeconomic theory to this day, while Richard Cantillon inspired Adam Smith, among others. John B. Cosgrave was a specialist in number theory who discovered a 2000-digit prime number in 1999 and a record composite Fermat number in 2003. John Lighton Synge made progress in different fields of science, including mechanics and geometrical methods in general relativity. He had mathematician John Nash as one of his students.
Ireland has nine universities, seven in the Republic of Ireland and two in Northern Ireland, including Trinity College, Dublin and the University College Dublin, as well as numerous third-level colleges and institutes and a branch of the Open University, the Open University in Ireland.
The island of Ireland fields a single international team in most sports. One notable exception to this is association football, although both associations continued to field international teams under the name "Ireland" until the 1950s. An all-Ireland club competition for soccer, the Setanta Cup, was created in 2005.
Gaelic football is the most popular sport in Ireland in terms of match attendance and community involvement, with about 2,600 clubs on the island. In 2003 it represented 34% of total sports attendances at events in Ireland and abroad, followed by hurling at 23%, soccer at 16% and rugby at 8%, and the All-Ireland Football Final is the most watched event in the sporting calendar. Soccer is the most widely played team game on the island and the most popular in Northern Ireland. Swimming, golf, aerobics, soccer, cycling, Gaelic football and billiards/snooker are the sporting activities with the highest levels of playing participation. Soccer is also the most notable exception in which the Republic of Ireland and Northern Ireland field separate international teams.
In recent years ice hockey has seen an increase in popularity, notably with the Belfast Giants ice hockey team in Northern Ireland. Northern Ireland have also produced two World Snooker Champions. Many other sports are also played and followed, including basketball, boxing, cricket, fishing, greyhound racing, handball, hockey, horse racing, motor sport, show jumping and tennis.
Gaelic football, hurling and handball are the best-known of the Irish traditional sports, collectively known as Gaelic games. Gaelic games are governed by the Gaelic Athletic Association (GAA), with the exception of ladies' Gaelic football and camogie (women's variant of hurling), which are governed by separate organisations. The headquarters of the GAA (and the main stadium) is located at the 82,500 capacity Croke Park in north Dublin. Many major GAA games are played there, including the semi-finals and finals of the All-Ireland Senior Football Championship and All-Ireland Senior Hurling Championship. During the redevelopment of the Lansdowne Road stadium in 2007–10, international rugby and soccer were played there. All GAA players, even at the highest level, are amateurs, receiving no wages, although they are permitted to receive a limited amount of sport-related income from commercial sponsorship.
The Irish Football Association (IFA) was originally the governing body for soccer across the island. The game has been played in an organised fashion in Ireland since the 1870s, with Cliftonville F.C. in Belfast being Ireland's oldest club. It was most popular, especially in its first decades, around Belfast and in Ulster. However, some clubs based outside Belfast thought that the IFA largely favoured Ulster-based clubs in such matters as selection for the national team. In 1921, following an incident in which, despite an earlier promise, the IFA moved an Irish Cup semi-final replay from Dublin to Belfast, Dublin-based clubs broke away to form the Football Association of the Irish Free State. Today the southern association is known as the Football Association of Ireland (FAI). Despite being initially blacklisted by the Home Nations' associations, the FAI was recognised by FIFA in 1923 and organised its first international fixture in 1926 (against Italy). However, both the IFA and FAI continued to select their teams from the whole of Ireland, with some players earning international caps for matches with both teams. Both also referred to their respective teams as Ireland.
In 1950, FIFA directed the associations only to select players from within their respective territories and, in 1953, directed that the FAI's team be known only as "Republic of Ireland" and that the IFA's team be known as "Northern Ireland" (with certain exceptions). Northern Ireland qualified for the World Cup finals in 1958 (reaching the quarter-finals), 1982 and 1986. The Republic qualified for the World Cup finals in 1990 (reaching the quarter-finals), 1994, 2002 and the European Championships in 1988 and 2012. Across Ireland, there is significant interest in the English and, to a lesser extent, Scottish soccer leagues.
Unlike soccer, Ireland continues to field a single national rugby team and a single association, the Irish Rugby Football Union (IRFU), governs the sport across the island. The Irish rugby team have played in every Rugby World Cup, making the quarter-finals in four of them. Ireland also hosted games during the 1991 and the 1999 Rugby World Cups (including a quarter-final). There are four professional Irish teams; all four play in the Magners League (now called the RaboDirect Pro12) and at least three compete for the Heineken Cup. Irish rugby has become increasingly competitive at both the international and provincial levels since the sport went professional in 1994. During that time, Ulster (1999), Munster (2006 and 2008) and Leinster (2009 and 2011) have won the Heineken Cup. In addition to this, the Irish International side has had increased success in the Six Nations Championship against the other European elite sides. This success, including Triple Crowns in 2004, 2006 and 2007, culminated with a clean sweep of victories, known as a Grand Slam, in 2009.
The Ireland cricket team was among the associate nations that qualified for the 2007 Cricket World Cup. It defeated Pakistan and finished second in its pool, earning a place in the Super 8 stage of the competition. The team also competed in the 2009 ICC World Twenty20 after jointly winning the qualifiers, where they also made the Super 8 stage. Ireland also won the 2009 ICC World Cup Qualifier to secure their place in the 2011 Cricket World Cup, as well as official ODI status through 2013. Kevin O'Brien scored the fastest century in World Cup history (113 runs off 63 balls), as Ireland produced one of the great upsets to defeat England by 3 wickets in the 2011 tournament.
Rugby league in Ireland is governed by Rugby League Ireland, which runs the Irish Elite League; there are currently 20 teams across Ulster, Munster and Leinster. The Irish rugby league team is made up predominantly of players based in Ireland, England and Australia. Ireland reached the quarter-finals of the 2000 Rugby League World Cup as well as reaching the semi-finals in the 2008 Rugby League World Cup.
Horse racing and greyhound racing are both popular in Ireland. There are frequent horse race meetings and greyhound stadiums are well attended. The island is noted for the breeding and training of race horses and is also a large exporter of racing dogs. The horse racing sector is largely concentrated in County Kildare.
Irish athletics has seen some development in recent times, with Sonia O'Sullivan winning two notable medals at 5,000 metres; gold at the 1995 World Championships and silver at the 2000 Sydney Olympics. Gillian O'Sullivan won silver in the 20k walk at the 2003 World Championships, while sprint hurdler Derval O'Rourke won gold at the 2006 World Indoor Championship in Moscow. Olive Loughnane won a silver medal in the 20k walk in the World Athletics Championships in Berlin in 2009.
Ireland has won more medals in boxing than in any other Olympic sport. Boxing is governed by the Irish Amateur Boxing Association. Michael Carruth won a gold medal in the Barcelona Olympic Games and in 2008 Kenneth Egan won a silver medal in the Beijing Games. Paddy Barnes secured bronze in those games and gold in the 2010 European Amateur Boxing Championships (where Ireland came 2nd in the overall medal table) and 2010 Commonwealth Games. Katie Taylor has won gold in every European and World championship since 2005.
Golf is very popular and golf tourism is a major industry attracting more than 240,000 golfing visitors annually. The 2006 Ryder Cup was held at The K Club in County Kildare. Pádraig Harrington became the first Irishman since Fred Daly in 1947 to win the British Open at Carnoustie in July 2007. He successfully defended his title in July 2008 before going on to win the PGA Championship in August. Harrington became the first European to win the PGA Championship in 78 years and was the first winner from Ireland. Three golfers from Northern Ireland have been particularly successful. In 2010, Graeme McDowell became the first Irish golfer to win the U.S. Open, and the first European to win that tournament since 1970. Rory McIlroy, at the age of 22, won the 2011 U.S. Open, while Darren Clarke's latest victory was the 2011 Open Championship at Royal St George's.
The west coast of Ireland, Lahinch and Donegal Bay in particular, have popular surfing beaches, being fully exposed to the Atlantic Ocean. Donegal Bay is shaped like a funnel and catches west/south-west Atlantic winds, creating good surf, especially in winter. In recent years, Bundoran has hosted European championship surfing. Scuba diving is increasingly popular in Ireland with clear waters and large populations of sea life, particularly along the western seaboard. There are also many shipwrecks along the coast of Ireland, with some of the best wreck dives being in Malin Head and off the County Cork coast.
With thousands of lakes, over 14,000 kilometres (8,700 mi) of fish bearing rivers and over 3,700 kilometres (2,300 mi) of coastline, Ireland is a popular angling destination. The temperate Irish climate is suited to sport angling. While salmon and trout fishing remain popular with anglers, salmon fishing in particular received a boost in 2006 with the closing of the salmon driftnet fishery. Coarse fishing continues to increase its profile. Sea angling is developed with many beaches mapped and signposted, and the range of sea angling species is around 80.
Food and cuisine in Ireland takes its influence from the crops grown and animals farmed in the island's temperate climate and from the social and political circumstances of Irish history. For example, whilst from the Middle Ages until the arrival of the potato in the 16th century the dominant feature of the Irish economy was the herding of cattle, the number of cattle a person owned was equated to their social standing. Thus herders would avoid slaughtering a milk-producing cow.
For this reason, pork and white meat were more common than beef and thick fatty strips of salted bacon (or rashers) and the eating of salted butter (i.e. a dairy product rather than beef itself) have been a central feature of the diet in Ireland since the Middle Ages. The practice of bleeding cattle and mixing the blood with milk and butter (not unlike the practice of the Maasai) was common and black pudding, made from blood, grain (usually barley) and seasoning, remains a breakfast staple in Ireland. All of these influences can be seen today in the phenomenon of the "breakfast roll".
The introduction of the potato in the second half of the 16th century heavily influenced cuisine thereafter. Great poverty encouraged a subsistence approach to food and by the mid-19th century the vast majority of the population sufficed with a diet of potatoes and milk. A typical family, consisting of a man, a woman and four children, would eat 18 stone (110 kg) of potatoes a week. Consequently, dishes that are considered as national dishes represent a fundamental unsophistication to cooking, such as the Irish stew, bacon and cabbage, boxty, a type of potato pancake, or colcannon, a dish of mashed potatoes and kale or cabbage.
Since the last quarter of the 20th century, with a re-emergence of wealth in Ireland, a "New Irish Cuisine" based on traditional ingredients incorporating international influences has emerged. This cuisine is based on fresh vegetables, fish (especially salmon, trout, oysters, mussels and other shellfish), as well as traditional soda breads and the wide range of hand-made cheeses that are now being produced across the country. The potato remains however a fundamental feature of this cuisine and the Irish remain the highest per capita consumers of potatoes in Europe. An example of this new cuisine is "Dublin Lawyer": lobster cooked in whiskey and cream. Traditional regional foods can be found throughout the country, for example coddle in Dublin or drisheen in Cork, both a type of sausage, or blaa, a doughy white bread particular to Waterford.
Ireland once dominated the world's market for whiskey, producing 90% of the world's whiskey at the start of the 20th century. However, as a consequence of bootleggers during prohibition in the United States (who sold poor-quality whiskey bearing Irish-sounding names, thus eroding the pre-prohibition popularity of Irish brands) and tariffs on Irish whiskey across the British Empire during the Anglo-Irish Trade War of the 1930s, sales of Irish whiskey worldwide fell to a mere 2% by the mid-20th century. In 1953, an Irish government survey found that 50 per cent of whiskey drinkers in the United States had never heard of Irish whiskey.
Irish whiskey, however, remained popular domestically and in recent decades has grown in popularity again internationally. Typically, Irish whiskey is not as smoky as a Scotch whisky, but not as sweet as American or Canadian whiskies. Whiskey forms the basis of traditional cream liqueurs, such as Baileys, and the "Irish coffee" (a cocktail of coffee and whiskey reputedly invented at Foynes flying-boat station) is probably the best-known Irish cocktail.
Stout, a kind of porter beer, particularly Guinness, is typically associated with Ireland, although historically it was more closely associated with London. Porter remains very popular, although it has lost sales since the mid-20th century to lager. Cider, particularly Magners (marketed in the Republic of Ireland as Bulmers), is also a popular drink. Red lemonade, a soft-drink, is consumed on its own and as a mixer, particularly with whiskey.
Chronic granulomatous disease
Classification and external resources: OMIM 306400, 233690, 233700
Chronic granulomatous disease (CGD) is a diverse group of hereditary diseases in which certain cells of the immune system have difficulty forming the reactive oxygen compounds (most importantly, the superoxide radical) used to kill certain ingested pathogens. This leads to the formation of granulomata in many organs. CGD affects about 1 in 200,000 people in the United States, with at least 20 new cases diagnosed each year.
This condition was first described in 1957 as "a fatal granulomatosus of childhood". The underlying cellular mechanism that causes chronic granulomatous disease was discovered in 1967, and research since that time has further elucidated the molecular mechanisms underlying the disease.
Phagocytes (i.e., neutrophils, monocytes, and macrophages) require an enzyme to produce reactive oxygen species to destroy bacteria after they ingest the bacteria in a process called phagocytosis. This enzyme is termed "phagocyte NADPH oxidase" (PHOX). The initial step in this process involves the one-electron reduction of molecular oxygen to produce the superoxide free radical. Superoxide then undergoes a further series of reactions to produce products such as peroxide, hydroxyl radical and hypochlorite. The reactive oxygen species this enzyme produces are toxic to bacteria and help the phagocyte kill them once they are ingested. Defects in any of the four essential subunits of this enzyme can cause CGD of varying severity, dependent on the defect.
There are over 410 known possible defects in the PHOX enzyme complex that can lead to chronic granulomatous disease.
Most cases of chronic granulomatous disease are transmitted as a mutation on the X chromosome and are thus called an "X-linked trait". The affected gene on the X chromosome codes for the gp91 protein, gp91-PHOX (91 is the weight of the protein in kDa; gp means glycoprotein). CGD can also be transmitted in an autosomal recessive fashion (via CYBA and NCF1) and affect other PHOX proteins. The types of mutation that cause both forms of CGD are varied and may be deletions, frame-shift, nonsense, and missense mutations.
A low level of NADPH, the cofactor required for superoxide synthesis, can lead to CGD. This has been reported in women who are homozygous for the genetic defect causing glucose-6-phosphate dehydrogenase deficiency (G6PD), which is characterised by reduced NADPH levels.
Classically, patients with chronic granulomatous disease will suffer from recurrent bouts of infection due to the decreased capacity of their immune system to fight off disease-causing organisms. The recurrent infections they acquire involve a specific set of organisms.
People with CGD are sometimes infected with unique organisms that usually do not cause disease in people with normal immune systems. Some of the organisms that cause disease in CGD patients are Staphylococcus aureus, Escherichia coli, Klebsiella species, Aspergillus species, and Candida species.
Aspergillus has a unique propensity to cause infection in people with CGD. Of the Aspergillus species, Aspergillus fumigatus seems to be the one that most commonly causes disease.
Most people with CGD are diagnosed in childhood, usually before age 5. Early diagnosis is important since these people can be placed on antibiotics to ward off infections before they occur.
The nitroblue-tetrazolium (NBT) test is the original and most widely known test for chronic granulomatous disease. It is negative in CGD, and positive in normal individuals. This test depends upon the direct reduction of NBT by superoxide free radical to form an insoluble formazan. This test is simple to perform and gives rapid results, but only tells whether or not there is a problem with the PHOX enzymes, not how much they are affected. An advanced test called the cytochrome c reduction assay tells physicians how much superoxide a patient's phagocytes can produce. Once the diagnosis of CGD is established, a genetic analysis may be used to determine exactly which mutation is the underlying cause.
Management of chronic granulomatous disease revolves around two goals: 1) diagnose the disease early so that antibiotics can be given to keep an infection from occurring, and 2) educate the patient about his or her condition so that prompt treatment can be given if an infection occurs.
Physicians often prescribe the antibiotic trimethoprim-sulfamethoxazole to prevent bacterial infections. This drug also has the benefit of sparing the normal bacteria of the digestive tract. Fungal infection is commonly prevented with itraconazole, although a newer drug of the same type called voriconazole may be more effective. The use of this drug for this purpose is still under scientific investigation.
Interferon, in the form of interferon gamma-1b (Actimmune), is approved by the Food and Drug Administration for the prevention of infection in CGD. It has been shown to reduce the incidence of infections in CGD patients by 70% and to reduce their severity. Although its exact mechanism is still not entirely understood, it gives CGD patients more immune function and therefore greater ability to fight off infections. This therapy has been standard treatment for CGD for several years.
Gene therapy is currently being studied as a possible treatment for chronic granulomatous disease. CGD is well suited for gene therapy since it is caused by a mutation in a single gene which only affects one body system (the hematopoietic system). Viruses have been used to deliver a normal gp91 gene to rats with a mutation in this gene, and subsequently the phagocytes in these rats were able to produce oxygen radicals.
In 2006, two human patients with X-linked chronic granulomatous disease underwent gene therapy and blood cell precursor stem cell transplantation to their bone marrow. Both patients recovered from their CGD, clearing pre-existing infections and demonstrating increased oxidase activity in their neutrophils. However, long-term complications of this therapy are unknown.
The prognosis of chronic granulomatous disease is guarded, with long-term outcomes closely tied to early diagnosis and early therapeutic intervention. With increasing treatment options for CGD the life-span for these patients is expected to also increase. | 1 | 2 |
Advanced peer-to-peer networking (APPN) is a second generation of the Systems Network Architecture (SNA) from IBM. It moves SNA from a hierarchical, mainframe-centric environment to a peer-to-peer environment. It provides capabilities similar to other LAN protocols, such as dynamic resource definition and route discovery.
This article focuses on developing the network design and planning a successful migration to APPN. It covers the following topics:
- Evolution of SNA
- When to Use APPN as Part of a Network Design
- When to Use APPN Versus Alternative Methods of SNA Transport
- Overview of APPN
- Scalability Issues
- Backup Techniques in an APPN Network
- APPN in a Multiprotocol Environment
- Network Management
- Configuration Examples
|Note:||Although this article does discuss using APPN with DLSw+, for detailed information on using DLSw+, refer to Designing DLSw+ Internetworks.|
Evolution of SNA
Introduced in 1974, subarea SNA made the mainframe computer running Advanced Communications Function/Virtual Telecommunication Access Method (ACF/VTAM) the hub of the network. The mainframe was responsible for establishing all sessions (a connection between two resources over which data can be sent), activating resources, and deactivating resources. The design point of subarea SNA was reliable delivery of information across low-speed analog lines. Resources were explicitly predefined. This eliminated the need for broadcast traffic and minimized header overhead.
Many enterprises today maintain two networks: a traditional, hierarchical SNA subarea network and an interconnected LAN network that is based on connectionless, dynamic protocols. The advantage of the subarea SNA network is that it is manageable and provides predictable response time. The disadvantages are that it requires extensive system definition and does not take advantage of the capabilities of intelligent devices (for example, the PCs and workstations).
Role of APPN
With APPN, you can consolidate the two networks (an SNA subarea network and an interconnected LAN network) because APPN has many of the characteristics of the LAN networks and still offers the advantages of an SNA network. The major benefits of using APPN include the following:
- Connections are peer-to-peer, allowing any end user to initiate a connection with any other end user without the mainframe (VTAM) involvement.
- APPN supports subarea applications as well as newer peer-to-peer applications over a single network.
- APPN provides an effective routing protocol to allow SNA traffic to flow natively and concurrently with other protocols in a single network.
- Traditional SNA class of service (COS)/transmission priority can be maintained.
As SNA has evolved, one feature has remained critical to many users: COS. This feature provides traffic prioritization on an SNA session basis on the backbone. This, in turn, allows a single user to have sessions with multiple applications, each with a different COS. In APPN, this feature offers more granularity and extends this capability all the way to the end node rather than just between communication controllers.
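The transmission-priority behaviour that COS provides can be illustrated with a small sketch. This is not from the original guide: the COS names (#INTER for interactive traffic, #BATCH for batch traffic) are standard APPN-supplied names used here illustratively, and the frame names and priority values are hypothetical. The idea is simply that a node holding frames for several concurrent sessions transmits higher-priority COS traffic first:

```python
import heapq
from itertools import count

# Illustrative mapping of COS name to transmission priority (lower = sooner).
# APPN actually defines network, high, medium, and low transmission priorities.
PRIORITY = {"#INTER": 0, "#BATCH": 2}

class TransmissionQueue:
    """Dequeues frames by the transmission priority of their session's COS,
    FIFO within a priority level."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # monotonic tiebreak so equal priorities stay FIFO

    def enqueue(self, cos, frame):
        heapq.heappush(self._heap, (PRIORITY[cos], next(self._seq), frame))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = TransmissionQueue()
q.enqueue("#BATCH", "file-transfer-1")
q.enqueue("#INTER", "screen-update")
q.enqueue("#BATCH", "file-transfer-2")
print(q.dequeue())  # "screen-update" goes first despite arriving second
```

This mirrors how a single user's interactive session can be serviced ahead of that same user's batch transfer sharing the link, which is the granularity APPN extends all the way to the end node.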
Types of APPN Nodes
An APPN network has three types of nodes: LEN nodes, end nodes (EN), and network nodes (NN), as shown in Figure: Different types of APPN nodes.
Figure: Different types of APPN nodes
|Note:||Throughout the rest of this article, the abbreviations EN and NN are used in the illustrations. The full terms (end node and network node) are used within the text for clarity.|
Table: Different Types of APPN Nodes describes these different types of APPN nodes. The control point (CP), which is responsible for managing a node's resources and adjacent node communication in APPN, is key to an APPN node. The APPN Control Point is the APPN equivalent of the SSCP.
Table: Different Types of APPN Nodes
|Type of APPN Node||Description|
|Local Entry Networking (LEN) nodes||LEN nodes are pre-APPN, peer-to-peer nodes. They can participate in an APPN network by using the services provided by an adjacent network node. The CP of the LEN node manages the local resources but does not establish a CP-CP session with the adjacent network node. Session partners must be predefined to the LEN node, and the LEN node must be predefined to the adjacent network node. LEN nodes are also referred to as SNA node type 2.1, physical unit (PU) type 2.1, or PU2.1.|
|End nodes (EN)||End nodes contain a subset of full APPN functionality. They access the network through an adjacent network node and use the adjacent network node's routing services. An end node establishes a CP-CP session with an adjacent network node, and then uses that session to register resources, request directory services, and request routing information.|
|Network nodes (NN)||Network nodes contain full APPN functionality. The CP in a network node is responsible for managing the resources of the network node along with the attached end nodes and LEN nodes. The CP establishes CP-CP sessions with adjacent end nodes and network nodes. It also maintains network topology and directory databases, which are created and updated by dynamically gathering information from adjacent network nodes and end nodes over CP-CP sessions. In an APPN environment, network nodes are connected by transmission groups (TGs), which in the current APPN architecture refers to a single link. Consequently, the network topology is a combination of network nodes and transmission groups.|
For more background information on APPN, refer to the section Overview of APPN later in this article.
When to Use APPN as Part of a Network Design
APPN has two key advantages over other protocols:
- Native SNA routing
- COS for guaranteed service delivery
APPN, like Transmission Control Protocol/Internet Protocol (TCP/IP), is a routable protocol in which routing decisions are made at the network nodes. Although only the network node adjacent to the originator of the session selects the session path, every network node contributes to the process by keeping every other network node informed about the network topology. The network node adjacent to the destination also participates by providing detailed information about the destination. Only routers that are running as APPN network nodes can make routing decisions.
You need APPN in your network when a routing decision (for example, which data center or path) must be made. Figure: Determining where to use APPN in a network helps to illustrate the criteria you use to determine where APPN should be used in a network.
Figure: Determining where to use APPN in a network
In Figure: Determining where to use APPN in a network, a single link connects the branch office to the backbone. Therefore, a routing decision does not need to be made at the branch office. Consequently, an APPN network node might not be necessary at those sites.
Because there are two data centers, however, the routing decision about which data center to send the message to must be made. This routing decision can be made either at the data center or at the backbone routers. If you want this routing decision made at the data center, all messages are sent to a single data center using DLSw+, for example, and then routed to the correct data center using APPN only in the routers in the data center. If you want the routing decision to be made at the backbone routers, place the APPN network node in the backbone routers, where alternative paths are available for routing decisions outside of the data center. In this example, this latter approach is preferred because it isolates the function at the data center routers to channel attachment, reduces the number of hops to the second data center, and provides a path to a backup data center if something catastrophic occurs.
Because APPN requires more memory and additional software, it is generally a more expensive solution. The advantages of direct APPN routing and COS, however, often offset the added expense. In this case, the added expense to add APPN to the backbone and data center routers might be justifiable, whereas added expense at the branch might not be justifiable.
APPN at Every Branch
There are two cases for which adding an APPN network node at every branch can be cost justified:
- When COS Is Required
- When Branch-to-Branch Routing Is Required
When COS Is Required
COS implies that the user accesses multiple applications and must be able to prioritize traffic at an application level. Although other priority schemes, such as custom queuing, might be able to prioritize by end user, they cannot prioritize between applications for an individual user. If this capability is critical, APPN network nodes must be placed in the individual branches to consolidate the traffic between multiple users using COS. For instance, COS can ensure that credit card verification always gets priority over batch receipts to a retail company's central site.
It is important to understand where COS is used in the network today. If the network is a subarea SNA network, COS is used only between front-end processors (FEPs) and ACF/VTAM on the mainframe. Unless there is already an FEP at the branch office, branch traffic is not prioritized at the branch, although traffic can be prioritized from the FEP out. In this case, adding an APPN network node at the branch office prioritizes the traffic destined for the data center sooner, rather than waiting until it reaches the FEP, adding function beyond what is available today.
When Branch-to-Branch Routing Is Required
If branch-to-branch traffic is required, you can send all traffic to the central site and let those APPN network nodes route to the appropriate branch office. This is the obvious solution when both data center and branch-to-branch traffic are required and the branch is connected to the backbone over a single link. However, if a separate direct link to another branch is cost-justifiable, routing all traffic to the data center is unacceptable. In this case, making the routing decision at the branch is necessary. Using an APPN network node at the branch, data center traffic is sent over the data center link and branch-to-branch traffic is sent over the direct link.
In the example in Figure: Sample network for which branch-to-branch routing is required, each branch has two links to alternative routers at the data center. This is a case where APPN network nodes might be required at the branches so that the appropriate link can be selected. This can also be the design for branch-to-branch routing, adding a single hop rather than creating a full mesh of lines. This provides more direct routing than sending everything through the data center.
Figure: Sample network for which branch-to-branch routing is required
As you also learn in this article, scalability issues make it advantageous to keep the number of network nodes as small as possible. Understanding where native routing and COS is needed is key in minimizing the number of network nodes.
In summary, choosing where to implement APPN must be decided based on cost, scalability, and where native routing and COS are needed. Implementing APPN everywhere in your network might seem an obvious solution, but it is often unnecessary: deploying APPN everywhere would probably cost more than needed and could lead to scalability problems. Consequently, the best solution is to deploy APPN only where it is truly needed in your network.
When to Use APPN Versus Alternative Methods of SNA Transport
APPN and boundary network node (BNN)/boundary access node (BAN) over Frame Relay using RFC 1490 are the two methods of native SNA transport, where SNA is not encapsulated in another protocol. BAN and BNN allow direct connection to an FEP, using the Frame Relay network to switch messages, rather than providing direct SNA routing.
Although native might seem to be the appropriate strategy, APPN comes at the price of cost and network scalability, as indicated in the preceding section. With BNN/BAN additional cost is required to provide multiprotocol networking because the FEP does not handle multiple protocols. This implies that additional routers are required in the data center for other protocols and separate virtual circuits are required to guarantee service delivery for the SNA or APPN traffic.
DLSw+ provides encapsulation of SNA, where the entire APPN message is carried as data inside a TCP/IP message. There is often concern about the extra 40 bytes of header associated with TCP/IP. However, because Cisco offers alternatives such as Data Link Switching Lite, Fast Sequenced Transport (FST), and Direct Transport, which have shorter headers, header length is deemed noncritical to this discussion.
DLSw+ is attractive for those networks in which the end stations and data center will remain SNA-centric, but the backbone will be TCP/IP. This allows a single protocol across the backbone, while maintaining access to all SNA applications. DLSw+ does not provide native APPN routing, nor does it provide native COS. Consequently, DLSw+ is preferable for networks, in which cost is a key criterion, that have the following characteristics:
- A single data center or mainframe
- Single links from the branches
In general, DLSw+ is a lower-cost solution that requires less memory and software. In the vast majority of networks, DLSw+ will be combined with APPN, using APPN only where routing decisions are critical. With TCP/IP encapsulation, the TCP layer provides the same reliable delivery as SNA/APPN, but it does not provide the native routing and COS.
TN3270 transports the 3270 data stream inside a TCP/IP packet without SNA headers. This solution assumes that the end station has only a TCP/IP protocol stack and no SNA; therefore, TN3270 is not an alternative to APPN because APPN assumes the end station has an SNA protocol stack. APPN, like DLSw+, may still be required in the network to route between TN3270 servers and multiple mainframes or data centers.
In summary, APPN will frequently be used with DLSw+ in networks when a single backbone protocol is desired. BAN/BNN provides direct connectivity to the FEP but lacks the multiprotocol capabilities of other solutions. TN3270 is used only for TCP/IP end stations.
Overview of APPN
This section provides an overview of APPN and covers the following topics:
- Defining Nodes
- Establishing APPN Sessions
- Understanding Intermediate Session Routing
- Using Dependent Logical Unit Requester/Server
Nodes, such as ACF/VTAM, OS/400, and Communications Server/2 (CS/2), can be defined as either network nodes or end nodes. When you have a choice, consider the following issues:
- Network size-How large is the network? Building large APPN networks can introduce scalability issues. Reducing the number of network nodes is one solution for avoiding scalability problems. For more information on reducing the number of network nodes, see the section Reducing the Number of Network Nodes later in this article.
- Role of the node-Is it preferable to have this node performing routing functions as well as application processing? A separate network node can reduce processing cycles and memory requirements in an application processor.
Generally, you should define a network node whenever a routing decision needs to be made.
APPN Node Identifiers
An APPN node is identified by its network-qualified CP name, which has the format netid.name. The network identifier (netid) is an eight-character name that identifies the network or subnetwork in which the resource is located. The network identifier and name must be a combination of uppercase letters (A through Z), digits (0 through 9), and special characters ($,#,or @) but cannot have a digit as the first character.
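The naming rules above lend themselves to a mechanical check. The sketch below is a minimal validator written for illustration; the function name is ours and not part of any APPN product:

```python
import re

# One to eight characters; uppercase letters, digits, $, #, @;
# the first character must not be a digit (per the rules above).
_NAME = re.compile(r"[A-Z$#@][A-Z0-9$#@]{0,7}$")

def is_valid_cp_name(cp_name: str) -> bool:
    """Validate a network-qualified CP name of the form netid.name."""
    parts = cp_name.split(".")
    if len(parts) != 2:
        return False
    return all(_NAME.match(part) for part in parts)
```

For example, `is_valid_cp_name("CISCO.CP2")` passes, while a name whose first character is a digit, or a netid longer than eight characters, is rejected.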
Establishing APPN Sessions
In order for an APPN session to be established, the following must occur:
1. The end user requests a session with an application, which causes the end node to begin the process of session establishment by sending a LOCATE message to its network node server. For session initiation, the network node server provides the path to the destination end node, which allows the originating end node to send messages directly to the destination.
2. The network node uses directory services to locate the destination by first checking its internal directories. If the destination is not included in the internal directory, the network node sends a LOCATE request to the central directory server if one is available. If a central directory server is not available, the network node sends a LOCATE broadcast to the adjacent network nodes that in turn propagate the LOCATE throughout the network. The network node server of the destination returns a reply that indicates the location of the destination.
3. Based on the location of the destination, the COS requested by the originator of the session, the topology database, and the COS tables, the network node server of the originator selects the least expensive path that provides the appropriate level of service.
4. The originating network node server sends a LOCATE reply to the originating end node. The LOCATE reply provides the path to the destination.
5. The originating end node is then responsible for initiating the session. A BIND is sent from the originating end node to the destination end node, requesting a session. After the destination replies to the BIND, session traffic can flow.
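The steps above can be sketched as a toy model of directory services. All node and resource names here are invented, and the real LOCATE/BIND exchange carries far more state than this:

```python
class NetworkNode:
    """Toy network node holding a local directory of resources."""
    def __init__(self, name, directory):
        self.name = name
        self.directory = directory   # resource name -> owning end node
        self.neighbors = []          # adjacent network nodes

    def locate(self, resource, visited=None):
        """Return the owner of `resource`, asking adjacents if needed
        (a simplified broadcast search)."""
        visited = visited or set()
        visited.add(self.name)
        if resource in self.directory:
            return self.directory[resource]
        for nn in self.neighbors:    # propagate the LOCATE request
            if nn.name not in visited:
                found = nn.locate(resource, visited)
                if found:
                    return found
        return None

# Two network nodes; only NN2 knows where the application LU lives.
nn1 = NetworkNode("NN1", {})
nn2 = NetworkNode("NN2", {"APPL01": "EN9"})
nn1.neighbors = [nn2]
nn2.neighbors = [nn1]

# The originating end node asks its network node server (NN1) to
# locate APPL01; NN1 asks NN2, which knows the destination.
destination = nn1.locate("APPL01")
```

Once the destination is known, the originating end node would send the BIND directly along the returned path.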
Understanding Intermediate Session Routing
Session connectors are used in place of routing tables in APPN. The unique session identifier and port from one side of the node are mapped to the unique session identifier and port on the other side. As data traffic passes through the node, the unique session identifier in the header is swapped for the outgoing identifier and sent out on the appropriate port, as shown in Figure: Intermediate session routing label swap.
Figure: Intermediate session routing label swap
This routing algorithm is called intermediate session routing (ISR). It supports dynamic route definition and incorporates the following legacy features:
- Node-to-node error and flow control processing-This reflects the 1970s method of packet switching in which many line errors dictated error and flow control at each node. Given the current high-quality digital facilities in many locations, this redundant processing is unnecessary and significantly reduces end-to-end throughput. End-to-end processing provides better performance and still delivers the necessary reliability.
- Disruptive session switching around network failures-Whenever a network outage occurs, all sessions using the path fail and have to be restarted to use an alternative path.
Because these features are undesirable in most high-speed networks today, a newer routing algorithm-High Performance Routing (HPR)-has been added to APPN that supports nondisruptive rerouting around failures and end-to-end error control, flow control, and segmentation. Cisco routers support both ISR and HPR.
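The label swap at the heart of ISR can be sketched as a simple table lookup. The port names and session identifiers below are invented for illustration:

```python
# Session connector: (incoming port, session id) -> (outgoing port, session id).
# ISR swaps the session identifier in each frame instead of consulting
# a routing table per message.
session_connectors = {
    ("port1", 0x2B): ("port4", 0x7E),
    ("port4", 0x7E): ("port1", 0x2B),   # reverse direction of the same session
}

def forward(in_port, frame):
    """Swap the session identifier and pick the outgoing port."""
    out_port, out_id = session_connectors[(in_port, frame["session_id"])]
    return out_port, {**frame, "session_id": out_id}

out_port, out_frame = forward("port1", {"session_id": 0x2B, "data": b"hello"})
```

The data payload is untouched; only the identifier in the header changes as the frame crosses the node.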
Using Dependent Logical Unit Requester/Server
Dependent Logical Unit Requester/Server (DLUR/DLUS) is an APPN feature that allows legacy traffic to flow on an APPN network. Prior to the introduction of this feature, the APPN architecture assumed that all nodes in a network could initiate peer-to-peer traffic (for example, sending the BIND to start the session). Many legacy terminals that are referred to as Dependent Logical Units (DLUs) cannot do this and require VTAM to notify the application, which then sends the BIND.
Getting the legacy sessions initiated requires a client-server relationship between ACF/VTAM (Dependent LU server-DLUS) and the Cisco router (Dependent LU Requester-DLUR). A pair of logical unit (LU) type 6.2 sessions are established between the DLUR and DLUS-one session is established by each end point. These sessions are used to transport the legacy control messages that must flow to activate the legacy resources and initiate their logical unit to logical unit (LU-LU) sessions. An LU-LU session is the connection that is formed when the five steps described earlier in the section Establishing APPN Sessions are completed.
For example, an activate logical unit (ACTLU) message must be sent to the LU to activate a legacy LU. Because this message is not recognized in an APPN environment, it is carried as encapsulated data on the LU 6.2 session. DLUR then deencapsulates it, and passes it to the legacy LU. Likewise, the DLU session request is passed to the ACF/VTAM DLUS, where it is processed as legacy traffic. DLUS then sends a message to the application host, which is responsible for sending the BIND. After the legacy LU-LU session is established, the legacy data flows natively with the APPN traffic, as shown in Figure: DLU session processing.
Figure: DLU session processing
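The encapsulation relationship can be illustrated with a toy framing sketch. The `LU62|` prefix is a stand-in of ours; real DLUR/DLUS traffic flows as SNA-formatted data on the LU 6.2 sessions:

```python
def dlus_encapsulate(legacy_message: bytes) -> bytes:
    """DLUS carries a legacy control message (such as ACTLU) as data
    on the LU 6.2 pipe between DLUS and DLUR."""
    return b"LU62|" + legacy_message

def dlur_deencapsulate(pipe_frame: bytes) -> bytes:
    """DLUR strips the LU 6.2 framing and passes the legacy message
    on to the dependent LU."""
    assert pipe_frame.startswith(b"LU62|")
    return pipe_frame[len(b"LU62|"):]

actlu = b"ACTLU for LU21"
delivered = dlur_deencapsulate(dlus_encapsulate(actlu))
```

The point is only the round trip: the legacy message is opaque to the APPN network while in transit and is restored unchanged at the DLUR.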
Cisco Implementation of APPN
This section provides an overview of Cisco's implementation of APPN and discusses where APPN resides in the Cisco IOS software. Cisco licensed the APPN source code from IBM and then ported it to the Cisco IOS software using network services from the data-link controls (DLCs).
Applications use APPN to provide network transport. APPN runs on top of the Cisco IOS software. APPN is a higher-layer protocol stack that requires network services from DLC. Cisco's APPN implementation is compliant with the APPN Architecture of record. When used with other features in the Cisco IOS software, APPN provides the following unique features:
- APPN can use DLSw+ or RSRB as a network transport, thereby supporting APPN over a native TCP/IP network.
- APPN can be used with downstream physical unit concentration (DSPU) to reduce the number of downstream PUs visible to VTAM. This reduces VTAM definition and network restart times.
- In addition to COS, priority queuing, custom queuing, and weighted fair queuing can be used with COS to ensure traffic prioritization and/or bandwidth reservation between protocols.
- Network management options are supported that include native SNA management services using Native Service Point (NSP) in the Cisco router, and Simple Network Management Protocol (SNMP) management using CiscoWorks Blue applications.
- Using Channel Interface Processor (CIP) or Channel Port Adapter (CPA), the Cisco APPN network node can interface directly with ACF/VTAM across the channel. VTAM can be defined either as an end node or network node.
Scalability Issues

Because APPN is a single-network link-state architecture, the network topology is updated as changes occur. This results in significant network traffic if instability occurs, and significant memory and processing to maintain the large topology databases and COS tables. Similarly, in large networks, dynamic discovery of resources can consume significant bandwidth and processing. For these reasons, scalability becomes a concern as network size increases. The number of nodes that is too large depends on the following:
- Amount of traffic
- Network stability
- The number of techniques described in this section that are used to control traffic and processing
Essentially, to allow growth of APPN networks, the network design must focus on reducing the number of topology database updates (TDUs) and LOCATE search requests.
Topology Database Update Reduction
APPN is a link-state protocol. Like other link-state-based algorithms, it maintains a database of the entire topology information of the network. Every APPN network node in the network sends out TDU packets that describe the current state of all its links to its adjacent network nodes. The TDU contains information that identifies the following:
- The characteristics of the sending node
- The node and link characteristics of the various resources in the network
- The sequence number of the most recent update for each described resource
A network node that receives a TDU packet propagates this information to its adjacent network nodes using a flow reduction technique. Each APPN network node maintains full knowledge of the network and how the network is interconnected. Once a network node detects a change to the network (either a change to the link, or the node), it floods TDUs throughout the network to ensure rapid convergence. If there is an unstable link in the network, it can potentially cause many TDU flows in a network.
As the number of network nodes and links increases, so does the number of TDU flows in your network. This type of topology distribution can consume significant CPU cycles, memory, and bandwidth. Maintaining routes and a large, complete topology database can require a significant amount of dynamic memory.
You can use the following techniques to reduce the amount of TDU flows in the network:
- Reduce the number of links
- Reduce the number of CP-CP sessions
- Reduce the number of network nodes in the network
Reducing the Number of Links
The first technique for reducing the amount of TDU flows in the network is to reduce the number of links in your network. In some configurations, it might be possible to use the concept of connection network to reduce the number of predefined links in your network. Because network nodes exchange information about their links, the fewer links you define, the fewer TDU flows can occur.
Figure: Physical view of an APPN network shows the physical view of an APPN network. In this network NN1, NN2, and NN3 are routers attached to an FDDI LAN.
Figure: Physical view of an APPN network
The network-node server (NNS), EN1, and EN2 hosts are attached to the same FDDI LAN via a CIP router or a cluster controller. These nodes on the FDDI LAN have any-to-any connectivity. To reflect any-to-any connectivity in APPN, NN1 needs to define a link to NN2, NN3, NNS (VTAM host), EN1 (VTAM data host), and EN2 (EN data host). The transmission groups connecting network nodes are contained in the network topology database. For every link that is defined to the network node, TDUs are broadcast.
Figure: Logical view of an APPN network without connection network deployed shows the logical view of the APPN network, shown earlier in Figure: Physical view of an APPN network. When NN1 first joins the network, NN1 activates the links to NN2, NN3, NNS, EN1, and EN2. CP-CP sessions are established with the adjacent network nodes. Each adjacent network node sends a copy of the current topology database to NN1. Similarly, NN1 creates a TDU about itself and its links to other network nodes and sends this information over the CP-CP sessions to NN2, NN3 and NNS. When NN2 receives the TDU from NN1, it forwards the TDU to its adjacent network nodes, which are NN3 and NNS. Similarly, NN3 and NNS receive the TDU from NN1 and broadcast this TDU to their adjacent network nodes. The result is that multiple copies of the TDU are received by every network node.
Figure: Logical view of an APPN network without connection network deployed
The transmission groups that connect the end nodes are not contained in the network topology database. Consequently, no TDUs are broadcast for the two links to EN1 and EN2. If the number of transmission groups connecting network nodes can be reduced, the number of TDU flows can also be reduced.
By using the concept of connection networks, you can eliminate the transmission group definitions, and therefore reduce TDU flows. A connection network is a single virtual routing node (VRN), which provides any-to-any connectivity for any of its attached nodes. The VRN is not a physical node, it is a logical entity that indicates that nodes are using a connection network and a direct routing path can be selected.
Figure: Logical view of an APPN network with connection network deployed shows the same APPN network as Figure: Physical view of an APPN network, this time with connection network deployed.
Figure: Logical view of an APPN network with connection network deployed
NN1, NN2, and NN3 define a link to the network-node server (NNS) and a link to the VRN. When the link between NN1 and NNS is activated, NNS sends a copy of the current network topology database to NN1. NN1 creates a TDU about itself, its link to NNS, and its link to the VRN. It then sends this information to NNS. NN1 does not have a link defined to NN2 and NN3, therefore, there are no TDUs sent to NN2 and NN3 from NN1. When NNS receives the TDU information from NN1, NNS forwards it to NN2 and NN3. Neither NN2 nor NN3 forwards the TDU information because they only have a connection to NNS. This significantly reduces the number of TDU flows in the network.
When a session is activated between resources on the connection network, the network-node server recognizes that this is a connection network and selects a direct route rather than routing through its own network nodes. Cisco recommends that you apply the concept of connection networks whenever possible. Not only does it reduce the number of TDU flows in the network, it also greatly reduces system definitions.
As shown in the example, a LAN (Ethernet, Token Ring, or FDDI) can be defined as a connection network. With ATM LAN Emulation (LANE) services, you can interconnect ATM networks with traditional LANs. From APPN's perspective, because an ATM-emulated LAN is just another LAN, connection network can be applied. In addition to LANs, the concept of connection networks can apply to X.25, Frame Relay, and ATM networks. It should also be noted that technologies such as RSRB and DLSw appear as LANs to APPN. You can also use connection network in these environments. APPN, in conjunction with DLSw+ or RSRB, provides a synergy between routing and bridging for SNA traffic.
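The definition savings from a connection network can be counted directly: any-to-any predefined transmission groups among n network-capable nodes grow as n(n-1)/2, while a connection network needs only one defined link per node to the VRN. A quick, purely illustrative count:

```python
def full_mesh_links(n_nodes: int) -> int:
    """Transmission groups needed for any-to-any predefined links."""
    return n_nodes * (n_nodes - 1) // 2

def connection_network_links(n_nodes: int) -> int:
    """One defined link per node to the virtual routing node (VRN)."""
    return n_nodes

# For the four network-capable nodes on the FDDI LAN (NN1, NN2, NN3, NNS):
mesh = full_mesh_links(4)            # 6 defined transmission groups
vrn = connection_network_links(4)    # 4 links to the VRN
```

Because each defined transmission group between network nodes is broadcast in TDUs, the gap widens quickly: at ten nodes, a full mesh needs 45 definitions versus 10 with a connection network.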
Reducing the Number of CP-CP Sessions
The second technique for reducing the amount of TDU flows in the network is to reduce the number of CP-CP sessions in your network. Network nodes exchange topology updates over CP-CP sessions. The number of CP-CP sessions has a direct impact on the number of TDU flows in the network.
For example, in Figure: Fully meshed CP-CP sessions, NN2, NN3, NN4, and NN5 are in a fully meshed network. Every network node establishes CP-CP sessions with its adjacent network nodes. This means that NN2 establishes CP-CP sessions with NN3, NN4, and NN5. NN3 establishes CP-CP sessions with NN2, NN4, NN5, and so forth.
Figure: Fully meshed CP-CP sessions
If the link fails between NN1 and NN2, TDU updates are broadcast from NN2 to NN3, NN4, and NN5. When NN3 receives the TDU update, it resends this information to NN4 and NN5. Similarly, when NN5 receives the TDU update, it resends this information to NN3 and NN4. This means that NN4 receives the same information three times. It is recommended that the number of CP-CP sessions be kept to a minimum so that duplicate TDU information is not received.
In Figure: Single pair of CP-CP sessions, CP-CP sessions exist only between NN2 and NN3, NN2 and NN4, and NN2 and NN5; no other CP-CP sessions exist. When the link fails between NN1 and NN2, NN2 broadcasts transmission group updates to NN3, NN4, and NN5. None of the three NNs forwards this information to the rest of the network because CP-CP sessions do not exist. Although this minimizes the TDU flows, if the link between NN2 and NN3 fails, this becomes a disjointed APPN network and NN3 is isolated.
Figure: Single pair of CP-CP sessions
Figure: Dual pair of CP-CP sessions shows a more efficient design that also provides redundancy. Every network node has CP-CP sessions with two adjacent network nodes. NN2 has CP-CP sessions with NN3 and NN5. If the link between NN2 and NN3 fails, TDU updates will be sent via NN5 and NN4.
For redundancy purposes, it is recommended that each network node has CP-CP sessions to two other network nodes if possible.
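The effect of trimming CP-CP sessions can be estimated with a small flood simulation. This mirrors the four-node examples above but is a simplification of ours; real APPN uses sequence numbers and flow-reduction techniques not modeled here:

```python
def flood(adjacency, origin):
    """Count TDU messages when `origin` floods an update over CP-CP
    sessions; each node forwards a newly seen TDU to all partners
    except the one it came from."""
    messages = 0
    seen = {origin}
    queue = [(origin, None)]
    while queue:
        node, came_from = queue.pop(0)
        for partner in adjacency[node]:
            if partner == came_from:
                continue
            messages += 1                  # one TDU sent on this session
            if partner not in seen:
                seen.add(partner)
                queue.append((partner, node))
    return messages

# Fully meshed CP-CP sessions among NN2..NN5.
mesh = {"NN2": ["NN3", "NN4", "NN5"],
        "NN3": ["NN2", "NN4", "NN5"],
        "NN4": ["NN2", "NN3", "NN5"],
        "NN5": ["NN2", "NN3", "NN4"]}

# Each node keeps CP-CP sessions with only two neighbors (a ring).
ring = {"NN2": ["NN3", "NN5"],
        "NN3": ["NN2", "NN4"],
        "NN4": ["NN3", "NN5"],
        "NN5": ["NN4", "NN2"]}

mesh_msgs = flood(mesh, "NN2")
ring_msgs = flood(ring, "NN2")
```

In this toy run the full mesh carries nine TDU messages for one update while the two-session design carries five, and the ring still reaches every node if any single link fails.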
Reducing the Number of Network Nodes
The third technique for reducing the amount of TDU flows in the network is to reduce the number of network nodes by defining APPN nodes only at the edges of the network. Minimizing the number of network nodes also reduces the size of the network topology. The following are some technologies for reducing the number of network nodes:
- APPN over DLSw+
- APPN over Frame Relay Access Server (FRAS)/BNN or BAN
- APPN over RSRB
Figure: Dual pair of CP-CP sessions
APPN over DLSw+
Data link switching is one way to reduce the number of network nodes in the network. DLSw+ is a means of transporting APPN traffic across a WAN, where APPN network nodes and/or end nodes are defined only at the edges of the network. Intermediate routing is through DLSw+ and not via native SNA.
DLSw defines a standard to integrate SNA/APPN and LAN internetworks by encapsulating these protocols within IP. Cisco's implementation of DLSw, known as DLSw+, is a superset of the current DLSw architecture. DLSw+ has many value-added features that are not available in other vendors' DLSw implementations. APPN, when used with DLSw, can benefit from the many scalability enhancements that are implemented in DLSw+, such as border peer, on-demand peers, caching algorithms, and explorer firewalls.
In Figure: APPN with DLSw+, sessions between end-node workstations and the host are transported over the DLSw+ network.
Figure: APPN with DLSw+
VTAM acts as the network-node server for remote end-node workstations. Optionally, if multiple VTAMs or data centers exist, APPN on the channel-attached router(s) or on other routers in the data center can offload VTAM by providing the SNA routing capability, as shown in Figure: APPN with DLSw+ using a channel-attached router.
Figure: APPN with DLSw+ using a channel-attached router
DLSw+ also brings nondisruptive rerouting in the event of a WAN failure. Using DLSw+ as a transport reduces the number of network nodes in the network. A disadvantage is that remote end-node workstations require WAN connections for NNS services. Another disadvantage is that without APPN in the routers, APPN transmission priority is lost when traffic enters the DLSw+ network.
For detailed information on DLSw and DLSw+, refer to Designing DLSw+ Internetworks.
APPN over FRAS BNN/BAN
If the APPN network is based on a Frame Relay network, one option is to use the FRAS/BNN or the Frame Relay BAN function for host access. Both BNN and BAN allow a Cisco router to attach directly to an FEP. When you use FRAS/BNN, you are assuming that the Frame Relay network is performing the switching and that native routing is not used within the Frame Relay network. For an example of how APPN with FRAS BNN/BAN can be used in your network design, see the section Example of APPN with FRAS BNN later in this article.
APPN over RSRB
Using RSRB, the SNA traffic can be bridged from a remote site to a data center. The use of RSRB significantly reduces the total number of network nodes in the network, thus reducing the number of TDU flows in the network. Another advantage of using RSRB is that it provides nondisruptive routing in the event of a link failure. For more information on using RSRB, refer to Designing SRB Internetworks.
LOCATE Search Reduction
This section describes the broadcast traffic in an APPN network and how LOCATE searches can become a scalability issue in an APPN network. The impact of LOCATE searches in an APPN network varies from one network to the other. This section first identifies some of the causes of an excessive number of LOCATE searches, and then discusses the following four techniques you can use to minimize them:
- Safe-Store of Directory Cache
- Partial Directory Entries
- Central Directory Server (CDS)/Client
- Central Resource Registration
An APPN network node provides dynamic location of network resources. Every network node maintains dynamic knowledge of the resources in its own directory database. The distributed directory database contains a list of all the resources in the network. The LOCATE search request allows one network node to search the directory database of all other network nodes in the network.
When an end-node resource requests a session with a target resource that it has no knowledge of, it uses the distributed search capabilities of its network-node server to locate the target resource. If the network node does not have any knowledge of the target resource, the network node forwards the locate search request to all its adjacent network nodes requesting these nodes to assist the network-node server to locate the resource. These adjacent network nodes propagate these locate search requests to their adjacent network nodes. This search process is known as broadcast search.
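As a minimal sketch of the broadcast search just described (hypothetical node and LU names; Python used only for illustration, not router code), each network node consults its own directory and, failing that, forwards the LOCATE request to adjacent network nodes it has not yet asked:

```python
# Minimal sketch of an APPN broadcast LOCATE search (hypothetical topology).
# Each network node knows only its own directory; an unknown target is
# flooded to adjacent network nodes until some node answers.

def broadcast_locate(topology, directories, origin, target):
    """Return (owning_node, messages_sent), or (None, messages_sent)."""
    visited = {origin}
    frontier = [origin]
    messages = 0
    while frontier:
        next_frontier = []
        for node in frontier:
            if target in directories.get(node, set()):
                return node, messages
            for neighbor in topology[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    next_frontier.append(neighbor)
                    messages += 1  # one LOCATE request per newly asked node
        frontier = next_frontier
    return None, messages

# Hypothetical four-node network: NNA-NNB-NNC, plus NNA-NND.
topology = {
    "NNA": ["NNB", "NND"],
    "NNB": ["NNA", "NNC"],
    "NNC": ["NNB"],
    "NND": ["NNA"],
}
directories = {"NNC": {"LU21"}}  # only NNC knows the target LU

owner, msgs = broadcast_locate(topology, directories, "NNA", "LU21")
print(owner, msgs)  # NNC 3
```

Note that a search for a resource that no longer exists visits every node before giving up, which is exactly why excessive LOCATE broadcasts become a scalability concern.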
Although several mechanisms are put into place to reduce the LOCATE broadcast searches (for example, resource registration, and resource caching), there might still be an excessive amount of LOCATE flows in a network for such reasons as the network resources no longer exist, there is a mixture of subarea networks and APPN networks, or the resources are temporarily unavailable.
Safe-Store of Directory Cache
The first technique that you can use to minimize the LOCATE flows in your APPN network is the Safe-Store of Directory Cache, which is supported by the Cisco network-node implementation. Cache entries in a network node's directory database can be periodically written to a permanent storage medium: a TFTP host. This speeds recovery after a network-node outage or initial power loss, because resources do not have to be relearned through a LOCATE broadcast search after a router failure. This reduces the spikes of broadcasts that might otherwise occur when the APPN network is restarted.
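The safe-store idea can be sketched as follows (hypothetical file name and format; the router's actual storage format is not shown): cache entries are written out periodically, then reloaded after a restart instead of being relearned via LOCATE broadcasts:

```python
# Sketch of safe-store of directory cache (illustrative only):
# directory-cache entries are periodically persisted, then reloaded
# after a restart so resources need not be relearned via LOCATE.
import json
import os
import tempfile

def save_cache(cache, path):
    with open(path, "w") as f:
        json.dump(cache, f)

def load_cache(path):
    if not os.path.exists(path):
        return {}  # cold start: empty directory, full relearning needed
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.mkdtemp(), "dircache.json")
cache = {"CISCO.LU21": "CISCO.CP2"}  # resource -> owning control point
save_cache(cache, path)

restored = load_cache(path)  # simulated post-restart reload
print(restored == cache)     # True
```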
Partial Directory Entries
The second technique that you can use to minimize the LOCATE flows in your APPN network is to define the resources in the local directory database by identifying the end node or network node where the particular resource is located. The following is a sample configuration:
appn partner-lu-location CISCO.LU21
 owning-cp CISCO.CP2
 complete
The preceding example defines the location of an LU named CISCO.LU21 that is located with end node or network node CISCO.CP2. This command improves network performance by allowing a directed LOCATE search instead of a broadcast. The disadvantage is that definitions must be created. To alleviate this definition burden, it may be possible to use partially specified names to define multiple resources.
The following is a sample configuration:
appn partner-lu-location CISCO.LU
 owning-cp CISCO.CP2
 wildcard
 complete
The preceding example defines the location of all the LUs prefixed with the characters LU. Obviously, a naming convention is essential to the success of this type of node definition.
Central Directory Server (CDS)/Client
The third technique that you can use to minimize the LOCATE flows in your APPN network is to use the CDS/client function. The APPN architecture specifies a CDS that allows a designated network node to act as a focal point for locating network resources. In current APPN networks, every network node can potentially perform a broadcast search for a resource. This is because the directory services database is not replicated on every network node.
The CDS function allows a network node, with central directory client support, to send a directed LOCATE search to a CDS. If the CDS has no knowledge of the resource, it performs one broadcast search to find the resource. After the resource is found, the CDS caches the results in its directory. Subsequently, the CDS can provide the location of the resource to other network nodes without performing another broadcast search. The Cisco network-node implementation supports the central directory client function. VTAM is the only product that currently implements the CDS function.
Using the CDS means that there is a maximum of one broadcast search per resource in the network. This significantly reduces the amount of network traffic used for resource broadcast searching. You can define multiple CDSs in an APPN network. A network node learns the existence of a CDS via TDU exchange. If more than one CDS exists, the nearest one is used based on the number of hop counts. If a CDS fails, the route to the nearest alternative CDS is calculated automatically.
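The nearest-CDS selection can be sketched as follows (hypothetical server names and hop counts): pick the CDS with the fewest hops, and recompute over the remaining servers when one fails:

```python
# Sketch of nearest-CDS selection (illustrative only): among the known
# central directory servers, choose the one with the fewest hops; on
# failure, recompute over the remaining servers.

def nearest_cds(hop_counts, failed=()):
    candidates = {cds: hops for cds, hops in hop_counts.items()
                  if cds not in failed}
    if not candidates:
        return None  # no CDS reachable; fall back to broadcast search
    return min(candidates, key=candidates.get)

hops = {"VTAMA": 2, "VTAMB": 4}  # hop counts learned via TDU exchange
print(nearest_cds(hops))                    # VTAMA (2 hops)
print(nearest_cds(hops, failed={"VTAMA"}))  # VTAMB after failover
```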
Central Resource Registration
The fourth technique that you can use to minimize the LOCATE flows in your APPN network is to use the central resource registration function. An end node registers its local resources at its network-node server. If every resource is registered, all network nodes can query the CDS, which eliminates the need for broadcast searches.
Backup Techniques in an APPN Network
This section provides an overview of the various backup techniques in APPN network. The backup and recovery scenarios are representative of common environments and requirements. The following three backup scenarios are discussed:
- A secondary WAN link as a backup to a primary WAN link
- Dual WAN links and dual routers providing full redundancy
- APPN DLUR backup support using a Cisco CIP router
The first backup technique that you can use in your APPN network is to use a secondary WAN link as a backup to your primary WAN link. By using the concept of auto-activation on demand, you can back up a primary WAN link with a secondary WAN link by using any supported protocols (for example, Point-to-Point [PPP], Switched Multimegabit Data Service [SMDS], and X.25), as shown in Figure: Link backup.
Figure: Link backup
In Figure: Link backup, the Frame Relay link is the primary link and the ISDN dial link is the backup link. The requirement is that the ISDN link provides instantaneous backup for the primary link and it remains inactive until the primary link goes down. No manual intervention is needed. To support this, NNA needs to define two parallel transmission groups to NNB.
The primary link is defined using the following configuration command:
appn link-station PRIMARY
 port FRAME_RELAY
 fr-dest-address 35
 retry-limit infinite
 complete
The secondary link is defined as supporting auto-activation using the following configuration command:
appn link-station SECONDARY
 port PPP
 no connect-at-startup
 adjacent-cp-name NETA.NNB
 activate-on-demand
 complete
By specifying no connect-at-startup, the secondary link is not activated upon APPN node startup. To indicate auto-activation support, specify adjacent-cp-name and activate-on-demand.
When the primary link fails, APPN detects the link failure and CP-CP sessions failure, which is disruptive to any existing LU-LU sessions. Because there are multiple links from NNA to NNB, NNA attempts to re-establish the CP-CP sessions over the secondary link. The CP-CP sessions request will activate the secondary dial link automatically.
To ensure that the Frame Relay link is used as primary and the dial PPP link is used as the backup, define the transmission group characteristics to reflect that. For example, use the cost-per-connect-time parameter to define the relative cost of using the dial PPP/ISDN link.
Because the default cost-per-connect-time is zero, this makes the Frame Relay link the lower-cost, and therefore more desirable, route than the secondary dial link. When the primary link becomes active again, there is no mechanism in place to switch the sessions back to it automatically; manual intervention is required.
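The effect of cost-per-connect-time can be sketched as follows (hypothetical cost values; APPN's actual route computation weighs several COS characteristics, not just this one): because the default is zero, assigning a nonzero cost to the ISDN link leaves the Frame Relay transmission group as the lower-cost, preferred route:

```python
# Sketch of cost-per-connect-time route preference (illustrative only).
# The default cost-per-connect-time is 0, so the dial link is made less
# attractive simply by giving it a nonzero per-minute cost.

def route_cost(cost_per_connect_time, connect_minutes, base_cost=0):
    return base_cost + cost_per_connect_time * connect_minutes

frame_relay = route_cost(cost_per_connect_time=0, connect_minutes=60)
isdn_backup = route_cost(cost_per_connect_time=5, connect_minutes=60)

preferred = "FRAME_RELAY" if frame_relay <= isdn_backup else "ISDN"
print(preferred)  # FRAME_RELAY
```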
The second backup technique that you can use in your APPN network is dual WAN links and dual routers for full redundancy. In some cases, complete fault tolerance is required for mission-critical applications across the network. In these cases, you can have dual routers and dual links installed to provide protection against any kind of communications failure.
Figure: Full redundancy shows how you can use duplicate virtual MAC addresses via RSRB to provide full redundancy and load sharing.
Figure: Full redundancy
The router configuration for NNC is as follows:
source-bridge ring-group 200
!
interface TokenRing0
 ring-speed 16
 source-bridge 100 1 200
!
appn control-point NETA.NNC
 complete
!
appn port RSRB rsrb
 rsrb-virtual-station 4000.1000.2000 50 2 200
 complete
The router configuration for NND is as follows:
source-bridge ring-group 300
!
interface TokenRing0
 ring-speed 16
 source-bridge 100 5 300
!
appn control-point NETA.NND
 complete
!
appn port RSRB rsrb
 rsrb-virtual-station 4000.1000.2000 60 3 300
 complete
Both NNC and NND define an RSRB port with the same virtual MAC address. Every workstation will define the RSRB virtual MAC address as its destination MAC address of its network-node server. Essentially, a workstation can use either NNC or NND as its network-node server, depending on which node answers the test explorer frame first. The route to NNC consists of the following routing information:
Ring 100 -> Bridge 1 -> Ring 200 -> Bridge 2 -> Ring 50
Route to NND will consist of the following routing information:
Ring 100 -> Bridge 5 -> Ring 300 -> Bridge 3 -> Ring 60
When NND fails, sessions on NND can be re-established over NNC instantaneously. This is analogous to the duplicate Token Ring interface coupler (TIC) support on the FEP, except that no hardware is required. In Cisco's RSRB implementation, as shown in Figure: Full redundancy, Segment 20 and Bridge 1, and Segment 30 and Bridge 2 are virtual. Duplicate MAC addresses can be supported without the hardware in place.
The third backup technique is to use APPN DLUR with a Cisco CIP router to support the transfer of resource ownership from one System Services Control Point (SSCP) (VTAM) to another when a failure occurs, while maintaining existing sessions over the failure. DLUS/DLUR provides the capability to transfer SSCP ownership from the primary SSCP to the backup SSCP; DLUR can then obtain SSCP services from the backup SSCP without terminating LU-LU sessions that are in progress.
Figure: SSCP takeover with APPN and CIP illustrates how the FEP can be replaced with a CIP router running CIP SNA (CSNA).
Figure: SSCP takeover with APPN and CIP
In this example, VTAMA is the primary DLUS, VTAMB is the backup DLUS, and the CIP router is configured as the DLUR. Assume that LUA requests to log on to an application residing on VTAMB. When VTAMA and the DLUS-to-DLUR connections fail, the DLUR node attempts to establish a session with VTAMB, which is configured as the backup DLUS. When the control sessions to the DLUS are active, the DLUR node notifies VTAMB about all the active downstream physical and logical units. VTAMB sends active physical unit (ACTPU) and active logical unit (ACTLU) commands to these downstream devices. This transfers resource ownership from VTAMA to VTAMB.
After the SSCP-PU and SSCP-LU sessions are re-established with VTAMB, new LU-LU sessions are possible. In addition, the DLUR node notifies VTAMB about all the dependent logical units that have active sessions.
The LU-LU path between VTAMB and LUA would be VTAMB -> NNB -> NNA -> LUA. When VTAMA fails, LU-LU sessions are not disrupted because VTAMA is not part of the LU-LU session path. In fact, LUA has no knowledge that the owning SSCP (VTAMA) failed and a new SSCP became the new owner. This process is transparent to LUA.
APPN in a Multiprotocol Environment
The trend in internetworking is to provide network designers with greater flexibility in building multiprotocol networks. Cisco provides the following two mechanisms to transport SNA traffic over an internetwork:
- Natively via APPN
- Encapsulated, via features such as DLSw+ and RSRB
The key to building multiprotocol internetworks is to implement some kind of traffic priority or bandwidth reservation to ensure acceptable response time for mission-critical traffic while maintaining some internetworking resource for less delay-sensitive traffic.
Bandwidth Management and Queuing
The following are some Cisco bandwidth management and queuing features that can enhance the overall performance of your network:
- Priority queuing
- Custom queuing
- Weighted fair queuing
- APPN buffer and memory management
For many years, the mainframe has been the dominant environment for processing business-critical applications. Increasingly powerful intelligent workstations, the creation of client-server computing environments, and higher bandwidth applications are changing network topologies. With the proliferation of LAN-based client-server applications, many corporate networks are migrating from purely hierarchical SNA-based networks to all-purpose multiprotocol internetworks that can accommodate the rapidly changing network requirements. This is not an easy transition. Network designers must understand how well the different protocols use shared network resources without causing excessive contentions among them.
Cisco has for many years provided technologies that encapsulate SNA traffic and allow consolidation of SNA with multiprotocol networks. APPN on the Cisco router provides an additional option in multiprotocol internetworks where SNA traffic can now flow natively and concurrently with other protocols. Regardless of the technology used in a multiprotocol environment, network performance is the key consideration.
Some of the major factors affecting network performance in a multiprotocol environment are as follows:
- Media access speed-The time it takes for a frame to be sent over a link. The capacity requirement of the network must be understood. Insufficient network capacity is the primary contributor to poor performance. Whether you have a single protocol network or a multiprotocol network, sufficient bandwidth is required.
- Congestion control-The router must have sufficient buffering capacity to handle instantaneous bursts of data. In order to support a multiprotocol environment, buffer management plays an important role to ensure that one protocol does not monopolize the buffer memory.
- Latency in the intermediate routers-This includes packet processing time while traversing a router and queuing delay. The former constitutes a minor part of the total delay. The latter is the major factor because client-server traffic is bursty.
Typically, subarea SNA traffic is highly predictable and has low bandwidth requirements. Compared to SNA traffic, client-server traffic tends to be bursty in nature and has high bandwidth requirements. Unless there is a mechanism in place to protect mission-critical SNA traffic, network performance could be impacted.
Cisco provides many internetworking solutions to enterprise networks by allowing the two types of traffic with different characteristics to coexist and share bandwidth; at the same time providing protection for mission-critical SNA data against less delay-sensitive client-server data. This is achieved through the use of several priority queuing and/or bandwidth reservation mechanisms.
For example, interface priority output queuing provides a way to prioritize packets transmitted on a per-interface basis. The four possible queues associated with priority queuing (high, medium, normal, and low) are shown in Figure: Four queues of priority queuing. Priorities can be established based upon the protocol type, particular interface, SDLC address, and so forth.
Figure: Four queues of priority queuing
In Figure: Four queues of priority queuing, SNA, TCP/IP, NetBIOS, and other miscellaneous traffic are sharing the media. The SNA traffic is prioritized ahead of all other traffic, followed by TCP/IP, then NetBIOS, and finally other miscellaneous traffic. There is no aging algorithm associated with this type of queuing. Packets queued to the high-priority queue are always serviced before the medium queue, the medium queue is always serviced before the normal queue, and so forth.
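The servicing order can be sketched as follows (illustrative only; not the router's implementation): the highest-priority nonempty queue is always drained first, with no aging:

```python
# Sketch of strict priority queuing (illustrative only): the high queue
# is always serviced before medium, medium before normal, and so on.
from collections import deque

QUEUES = ["high", "medium", "normal", "low"]

def dequeue(queues):
    """Return the next frame to transmit, or None if all queues are empty."""
    for name in QUEUES:
        if queues[name]:
            return queues[name].popleft()
    return None

queues = {name: deque() for name in QUEUES}
queues["normal"].append("netbios-frame")
queues["medium"].append("tcpip-packet")
queues["high"].append("sna-frame")

order = [dequeue(queues) for _ in range(3)]
print(order)  # ['sna-frame', 'tcpip-packet', 'netbios-frame']
```

Because there is no aging, a steady stream of high-priority arrivals would starve the lower queues indefinitely, which is the fairness problem that custom queuing addresses.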
Priority queuing, however, introduces a fairness problem in that packets classified to lower priority queues might not get serviced in a timely manner, or at all. Custom queuing is designed to address this problem. Custom queuing allows more granularity than priority queuing. In fact, this feature is commonly used in the internetworking environment in which multiple higher-layer protocols are supported. Custom queuing reserves bandwidth for a specific protocol, thus allowing mission-critical traffic to receive a guaranteed minimum amount of bandwidth at any time.
The intent is to reserve bandwidth for a particular type of traffic. For example, in Figure: Example of custom queuing, SNA has 40 percent of the bandwidth reserved using custom queuing, TCP/IP 20 percent, NetBIOS 20 percent, and the remaining protocols 20 percent. The APPN protocol itself has the concept of COS that determines the transmission priority for every message. APPN prioritizes the traffic before sending it to the DLC transmission queue.
Figure: Example of custom queuing
Custom queuing prioritizes multiprotocol traffic. A maximum of 16 queues can be built with custom queuing. Each queue is serviced sequentially until the number of bytes sent exceeds the configurable byte count or the queue is empty. One important function of custom queuing is that if SNA traffic uses only 20 percent of the link, the remaining 20 percent allocated to SNA can be shared by the other traffic.
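One custom-queuing service cycle can be sketched as follows (hypothetical frame sizes and byte counts): each queue is serviced in turn until its byte count is exhausted or the queue is empty; note that, as in the real feature, the frame that crosses the byte-count threshold is still sent:

```python
# Sketch of a custom-queuing service cycle (illustrative only): queues
# are serviced round-robin, each until its configured byte count is
# exhausted or the queue empties, giving each protocol a guaranteed
# minimum share of the link.
from collections import deque

def custom_queue_round(queues, byte_counts):
    """Run one service cycle; return the (queue, frame_size) pairs sent."""
    sent = []
    for name, quota in byte_counts.items():
        remaining = quota
        q = queues[name]
        while q and remaining > 0:
            size = q.popleft()
            sent.append((name, size))
            remaining -= size  # frame crossing the threshold is still sent
    return sent

queues = {
    "sna":     deque([500, 500, 500]),
    "tcpip":   deque([1000]),
    "netbios": deque([600]),
}
# Byte counts roughly proportional to 40/20/20 percent reservations.
byte_counts = {"sna": 1200, "tcpip": 600, "netbios": 600}

print(custom_queue_round(queues, byte_counts))
```

If the SNA queue drains early in a cycle, the cycle simply moves on, which is how unused SNA bandwidth becomes available to the other protocols.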
Custom queuing is designed for environments that want to ensure a minimum level of service for all protocols. In today's multiprotocol internetwork environment, this important feature allows protocols of different characteristics to share the media. For an overview of how to use the other types of queuing to allow multiple protocols to coexist within a router, review Internetworking Design Basics.
Other Considerations with a Multiprotocol Environment
The memory requirement to support APPN is considerably higher than other protocols because of its large COS tables, network topology databases, and directory databases. To ensure that APPN will coexist with other network protocols when operating in a multiprotocol environment, users can define the maximum amount of memory available to APPN. The following is the sample configuration command.
appn control-point CISCONET.EARTH
 maximum-memory 16
 complete
The preceding command specifies that APPN will not use more than 16 megabytes (MB) of memory. The memory is then managed locally by APPN. You can also specify the amount of memory reserved for APPN by using the following command:
appn control-point CISCONET.EARTH
 minimum-memory 32
 complete
|Note:||Memory that is dedicated to APPN is not available for other processing. Use this command with caution.|
Although memory determines factors such as the number of sessions that APPN can support, buffer memory is required to regulate traffic sent to and from the router. To ensure that APPN has adequate buffers to support the traffic flows, you can define the percentage of buffer memory that is reserved for use by APPN. This prevents APPN from monopolizing the buffer memory available in the router.
The following is the sample configuration command.
appn control-point CISCONET.EARTH
 buffer-percent 60
 complete
APPN uses a statistical buffering algorithm to manage the buffer usage. When buffer memory is constrained, APPN uses various flow control mechanisms to protect itself from severe congestion or deadlock conditions as a result of buffer shortage.
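The buffer-percent cap can be sketched as follows (illustrative only; the router actually uses a statistical algorithm with flow control rather than the hard refusal shown here):

```python
# Sketch of a buffer-percent cap (illustrative only): APPN allocations
# are refused once APPN's share of total buffer memory would exceed the
# configured percentage, leaving the rest for other protocols.
class BufferPool:
    def __init__(self, total_bytes, appn_percent):
        self.total = total_bytes
        self.limit = total_bytes * appn_percent // 100
        self.appn_used = 0

    def alloc_appn(self, nbytes):
        if self.appn_used + nbytes > self.limit:
            return False  # refusal would trigger APPN flow control
        self.appn_used += nbytes
        return True

pool = BufferPool(total_bytes=1_000_000, appn_percent=60)
print(pool.alloc_appn(500_000))  # True: within the 600,000-byte limit
print(pool.alloc_appn(200_000))  # False: would exceed the limit
```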
As networks grow in size and complexity, there are many ways to provide network management for an enterprise. Table: Network Management Tools Available for APPN Networks summarizes Cisco's management products.
Table: Network Management Tools Available for APPN Networks
A common challenge in APPN networks is to understand the topology and status of the resources in the network. Show commands take advantage of the fact that all network nodes in a network (or subnetwork) have a fully replicated network topology database. Only a single network node is required to get a view of the APPN subnet, and it should not matter which network node is chosen. In order to obtain more detailed information, such as attached end nodes and LEN nodes, and local ports and link stations, additional network nodes should be checked.
CiscoWorks Blue Maps
A CiscoWorks application that shows logical maps of APPN, RSRB, and DLSw+ networks. It runs on the HP/UX, SunOS, and AIX operating systems. The APPN map is a manager for APPN SNMP agents, and displays the APPN network. The application can handle only a single network topology agent. If there are multiple subnets, the application can be started multiple times.
Native Service Point (NSP)
In SNA, a session between an SSCP and a PU is referred to as an SSCP-PU session. SSCPs use SSCP-PU sessions to send requests and receive status information from individual nodes. This information is then used to control the network configuration.
Alerts and Traps
NetView is the primary destination of alerts. It supports receiving alerts from both APPN and on the SSCP-PU session used by NSP. The Cisco router can send alerts on each session. At this time, two sessions are required: one for APPN-unique alerts and one for all other alerts. The new APPN MIB allows for APPN alerts to be sent as traps as well, with the Alert ID and affected resource included in the trap.
For example, the following NetView command assigns this NetView as the alert focal point for the router:

FOCALPT CHANGE, FPCAT=ALERT, TARGET=NETA.ROUTER
This section provides the following APPN network configuration examples:
- Simple APPN network
- APPN network with end stations
- APPN over DLSw+
It also provides the following examples of using APPN when designing your network:
- Subarea to APPN migration
- APPN/CIP in a Sysplex environment
- APPN with FRAS BNN
As the following examples show, the minimal configuration for an APPN node includes an APPN control-point statement for the node and a port statement for each interface.
Simple APPN Network Configuration
Figure: Example of a simple APPN network configuration shows an example of a simple APPN network that consists of four network nodes: Routers A, B, C, and D. Router A is responsible for initiating the connections to Routers B, C, and D. Consequently, it needs to define APPN logical links specifying the FDDI address of Router C, the ATM address of Router D, and so forth. Routers B, C, and D can dynamically create their link-station definitions when Router A connects.
Figure: Example of a simple APPN network configuration
This section provides sample configurations for each of these four network nodes (Routers A, B, C, and D) shown in Figure: Example of a simple APPN network configuration.
Router A Configuration
The following is a sample configuration for Router A shown in Figure: Example of a simple APPN network configuration. Note that all link stations are defined in Router A and dynamically discovered by the other routers. A link station connects two resources and must be defined with the destination address in one of the resources:
!
hostname routera
!
interface Serial0
 ip address 10.11.1.1 255.255.255.0
 encapsulation ppp
 no keepalive
 no fair-queue
 clockrate 4000000
!
interface Fddi0
 no ip address
 no keepalive
!
interface ATM0
 no ip address
 atm clock INTERNAL
 atm pvc 1 1 32 aal5nlpid
!
appn control-point CISCONET.ROUTERA
 complete
!
appn port PPP Serial0
 complete
!
appn port FDDI Fddi0
 desired-max-send-btu-size 3849
 max-rcv-btu-size 3849
 complete
!
appn port ATM ATM0
 complete
!
appn link-station LINKTOB
 port PPP
 complete
!
appn link-station LINKTOC
 port FDDI
 lan-dest-address 0000.6f85.a8a5
 no connect-at-startup
 retry-limit infinite 5
 complete
!
appn link-station LINKTOD
 port ATM
 atm-dest-address 1
 no connect-at-startup
 retry-limit infinite 5
 complete
!
Router B Configuration
The following is a sample configuration for Router B shown in Figure: Example of a simple APPN network configuration:
!
hostname routerb
!
interface Serial1
 ip address 10.11.1.2 255.255.255.0
 encapsulation ppp
 no keepalive
 no fair-queue
!
appn control-point CISCONET.ROUTERB
 complete
!
appn port PPP Serial1
 complete
!
appn routing
!
end
Router C Configuration
The following is a sample configuration for Router C shown in Figure: Example of a simple APPN network configuration:
!
hostname routerc
!
interface Fddi0
 no ip address
 no keepalive
!
appn control-point CISCONET.ROUTERC
 complete
!
appn port FDDI Fddi0
 desired-max-send-btu-size 3849
 max-rcv-btu-size 3849
 complete
!
appn routing
!
end
Router D Configuration
The following is a sample configuration for Router D shown in Figure: Example of a simple APPN network configuration:
!
hostname routerd
!
interface ATM0
 ip address 22.214.171.124 255.255.255.0
 atm pvc 1 1 32 aal5nlpid
!
appn control-point CISCONET.ROUTERD
 complete
!
appn port ATM ATM0
 complete
!
appn routing
!
end
APPN Network Configuration with End Stations
Figure: Example of an APPN network with end stations shows an example of an APPN network with end stations. At the remote location, Router B initiates the APPN connection to Router A at the data center.
Figure: Example of an APPN network with end stations
This section provides sample configurations for Routers A, B, and C shown in Figure: Example of an APPN network with end stations.
Sample Configuration for Router A
The following is a sample configuration for Router A in Figure: Example of an APPN network with end stations, which is responsible for initiating the APPN connection to the VTAM host:
hostname routera
!
interface TokenRing0
 no ip address
 mac-address 4000.1000.1000
 ring-speed 16
!
interface Serial0
 mtu 4096
 encapsulation frame-relay IETF
 keepalive 12
 frame-relay lmi-type ansi
 frame-relay map llc2 35
!
appn control-point CISCONET.ROUTERA
 complete
!
appn port FR0 Serial0
 complete
!
appn port TR0 TokenRing0
 complete
!
appn link-station TOVTAM
 port TR0
 lan-dest-address 4000.3745.0000
 complete
!
end
Sample Configuration for Router B
The following is a sample configuration for Router B shown in Figure: Example of an APPN network with end stations. At the remote location, Router B initiates the APPN connection to Router A at the data center and to the EN AS/400. Because a link station is not defined in Router B for CISCONET.ENCM2B, a link station must be defined in ENCM2B for Router B:
!
hostname routerb
!
interface TokenRing0
 mac-address 4000.1000.2000
 no ip address
 ring-speed 16
!
interface Serial0
 mtu 4096
 encapsulation frame-relay IETF
 keepalive 12
 frame-relay lmi-type ansi
 frame-relay map llc2 35
!
interface Serial2/7
 no ip address
 encapsulation sdlc
 no keepalive
 clockrate 19200
 sdlc role prim-xid-poll
 sdlc address 01
!
appn control-point CISCONET.ROUTERB
 complete
!
appn port FR0 Serial0
 complete
!
appn port SDLC Serial1
 sdlc-sec-addr 1
 complete
!
appn port TR0 TokenRing0
 complete
!
appn link-station AS400
 port SDLC
 role primary
 sdlc-dest-address 1
 complete
!
appn link-station ROUTERA
 port FR0
 fr-dest-address 35
 complete
!
end
Sample Configuration for Router C
The following is a sample configuration for Router C shown in Figure: Example of an APPN network with end stations. Router C initiates an APPN connection to Router A. Because there is not a link station for CISCONET.ENCM2C, one must be defined in the configuration for ENCM2C:
hostname routerc
!
interface TokenRing0
 mac-address 4000.1000.3000
 no ip address
 ring-speed 16
!
interface Serial0
 mtu 4096
 encapsulation frame-relay IETF
 keepalive 12
 frame-relay lmi-type ansi
 frame-relay map llc2 36
!
appn control-point CISCONET.ROUTERC
 complete
!
appn port FR0 Serial0
 complete
!
appn port TR0 TokenRing0
 complete
!
appn link-station TORTRA
 port FR0
 fr-dest-address 36
 complete
!
end
APPN over DLSw+ Configuration Example
Figure: Example of APPN with DLSw+ shows an example of APPN with DLSw+. ROUTER A is a DLSw+ router with no APPN functions and ROUTERB is running DLSw+ and APPN.
Figure: Example of APPN with DLSw+
Sample Configurations
The following section provides sample configurations for ROUTERA and ROUTERB and the two workstations shown in Figure: Example of APPN with DLSw+.
Sample Configuration of DLSw+ ROUTERA
The following is a sample configuration for the DLSw+ ROUTERA shown in Figure: Example of APPN with DLSw+:
hostname routera
!
source-bridge ring-group 100
dlsw local-peer peer-id 10.4.21.3
dlsw remote-peer 0 tcp 10.4.21.1
!
interface Serial0
 mtu 4096
 ip address 10.4.21.3 255.255.255.0
 encapsulation frame-relay IETF
 keepalive 12
 no fair-queue
 frame-relay lmi-type ansi
 frame-relay map llc2 56
!
interface TokenRing0
 ip address 10.4.22.2 255.255.255.0
 ring-speed 16
 multiring all
 source-bridge 5 1 100
!
Sample Configuration for Workstation Attached to ROUTERA
The following is a sample CS/2 configuration for the OS/2 workstation named CISCONET.ENCM2A shown in Figure: Example of APPN with DLSw+. This workstation is attached to the DLSw+ router named ROUTERA. The workstation is configured as an end node and it uses ROUTERB as the network-node server. The destination MAC address configured on this workstation is the virtual MAC address configured in ROUTERB on the appn port statement. A sample of the DLSw+ ROUTERB configuration is provided in the next section.
DEFINE_LOCAL_CP
 FQ_CP_NAME(CISCONET.ENCM2A)
 CP_ALIAS(ENCM2C)
 NAU_ADDRESS(INDEPENDENT_LU)
 NODE_TYPE(EN)
 NODE_ID(X'05D00000')
 NW_FP_SUPPORT(NONE)
 HOST_FP_SUPPORT(YES)
 MAX_COMP_LEVEL(NONE)
 MAX_COMP_TOKENS(0);

DEFINE_LOGICAL_LINK
 LINK_NAME(TORTRB)
 ADJACENT_NODE_TYPE(LEARN)
 PREFERRED_NN_SERVER(YES)
 DLC_NAME(IBMTRNET)
 ADAPTER_NUMBER(0)
 DESTINATION_ADDRESS(X'400010001112')
 ETHERNET_FORMAT(NO)
 CP_CP_SESSION_SUPPORT(YES)
 SOLICIT_SSCP_SESSION(YES)
 NODE_ID(X'05D00000')
 ACTIVATE_AT_STARTUP(YES)
 USE_PUNAME_AS_CPNAME(NO)
 LIMITED_RESOURCE(NO)
 LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
 MAX_ACTIVATION_ATTEMPTS(USE_ADAPTER_DEFINITION)
 EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
 COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
 COST_PER_BYTE(USE_ADAPTER_DEFINITION)
 SECURITY(USE_ADAPTER_DEFINITION)
 PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
 USER_DEFINED_1(USE_ADAPTER_DEFINITION)
 USER_DEFINED_2(USE_ADAPTER_DEFINITION)
 USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_DEFAULTS
 IMPLICIT_INBOUND_PLU_SUPPORT(YES)
 DEFAULT_MODE_NAME(BLANK)
 MAX_MC_LL_SEND_SIZE(32767)
 DIRECTORY_FOR_INBOUND_ATTACHES(*)
 DEFAULT_TP_OPERATION(NONQUEUED_AM_STARTED)
 DEFAULT_TP_PROGRAM_TYPE(BACKGROUND)
 DEFAULT_TP_CONV_SECURITY_RQD(NO)
 MAX_HELD_ALERTS(10);

START_ATTACH_MANAGER;
Sample Configuration for DLSw+ ROUTERB
ROUTERB, shown in Figure: Example of APPN with DLSw+, is an APPN router that uses the APPN over DLSw+ feature. The VDLC operand on the port statement indicates that APPN is carried over DLSw+. The following is a sample configuration for this router:
hostname routerb
!
source-bridge ring-group 100
dlsw local-peer peer-id 10.4.21.1
dlsw remote-peer 0 tcp 10.4.21.3
!
interface Serial2/0
 mtu 4096
 ip address 10.4.21.1 255.255.255.0
 encapsulation frame-relay IETF
 keepalive 12
 no fair-queue
 frame-relay map llc2 35
!
interface TokenRing0
 no ip address
 ring-speed 16
 mac-address 4000.5000.6000
!
appn control-point CISCONET.ROUTERB
 complete
!
appn port VDLC vdlc
 vdlc 100 vmac 4000.1000.1112
 complete
!
Sample Configuration for Workstation Attached to ROUTERB
The following is a sample CS/2 configuration for the OS/2 workstation named CISCONET.ENCM2B shown in Figure: Example of APPN with DLSw+. This workstation is attached to the DLSw+ router named ROUTERB:
DEFINE_LOCAL_CP
 FQ_CP_NAME(CISCONET.ENCM2B)
 CP_ALIAS(ENCM2C)
 NAU_ADDRESS(INDEPENDENT_LU)
 NODE_TYPE(EN)
 NODE_ID(X'05D00000')
 NW_FP_SUPPORT(NONE)
 HOST_FP_SUPPORT(YES)
 MAX_COMP_LEVEL(NONE)
 MAX_COMP_TOKENS(0);

DEFINE_LOGICAL_LINK
 LINK_NAME(TORTRB)
 ADJACENT_NODE_TYPE(LEARN)
 PREFERRED_NN_SERVER(YES)
 DLC_NAME(IBMTRNET)
 ADAPTER_NUMBER(0)
 DESTINATION_ADDRESS(X'400050006000')
 ETHERNET_FORMAT(NO)
 CP_CP_SESSION_SUPPORT(YES)
 SOLICIT_SSCP_SESSION(YES)
 NODE_ID(X'05D00000')
 ACTIVATE_AT_STARTUP(YES)
 USE_PUNAME_AS_CPNAME(NO)
 LIMITED_RESOURCE(NO)
 LINK_STATION_ROLE(USE_ADAPTER_DEFINITION)
 MAX_ACTIVATION_ATTEMPTS(USE_ADAPTER_DEFINITION)
 EFFECTIVE_CAPACITY(USE_ADAPTER_DEFINITION)
 COST_PER_CONNECT_TIME(USE_ADAPTER_DEFINITION)
 COST_PER_BYTE(USE_ADAPTER_DEFINITION)
 SECURITY(USE_ADAPTER_DEFINITION)
 PROPAGATION_DELAY(USE_ADAPTER_DEFINITION)
 USER_DEFINED_1(USE_ADAPTER_DEFINITION)
 USER_DEFINED_2(USE_ADAPTER_DEFINITION)
 USER_DEFINED_3(USE_ADAPTER_DEFINITION);

DEFINE_DEFAULTS
 IMPLICIT_INBOUND_PLU_SUPPORT(YES)
 DEFAULT_MODE_NAME(BLANK)
 MAX_MC_LL_SEND_SIZE(32767)
 DIRECTORY_FOR_INBOUND_ATTACHES(*)
 DEFAULT_TP_OPERATION(NONQUEUED_AM_STARTED)
 DEFAULT_TP_PROGRAM_TYPE(BACKGROUND)
 DEFAULT_TP_CONV_SECURITY_RQD(NO)
 MAX_HELD_ALERTS(10);

START_ATTACH_MANAGER;
Note: For more information on DLSw+, see Designing DLSw+ Internetworks.
Example of Subarea to APPN Migration
This section provides an overview of the implementation and conversion of the SNA network from subarea FEP-based to APPN router-based. It explores the use of DLSw+ as a migration technology from traditional SNA to APPN, and covers the migration steps. The example involves a large insurance company in Europe. The company plans to replace the FEPs with Cisco routers, migrating from subarea to APPN routing.
Figure: SNA FEP-based network shows the company's current SNA network. The network consists of two mainframe sites running four VTAM images with a Communications Management Complex (CMC) host in each data center, as shown in Figure: SNA FEP-based network. In every data center, four NCR Comten FEPs (IBM 3745-compatible) support traffic from multiple regional offices. There are also two NCR Comten FEPs that provide SNA Network Interconnect (SNI) support.
Figure: SNA FEP-based network
There are 22 regional offices across the country. Every regional office has two NCR Comten FEPs installed, one connecting to Data Center 1 and the other connecting to Data Center 2. The remote FEPs have dual Token Rings that are connected via a bridge; duplicate TIC address support is implemented for backup and redundancy. This means that a PU2.0 station can connect to the host through any one of the two FEPs. If one FEP fails, PU2.0 stations can access the host via the other FEP.
In addition to the Token-Ring-attached devices (approximately 15 per regional office), the two FEPs also run NCP Packet-Switching Interface (NPSI), supporting over 200 remotely attached devices via the public X.25 network. The total number of LUs supported per regional office is approximately 1800, with 1500 active LU-LU sessions at any one time. The estimated traffic rate is 15 transactions per second.
The first migration step is to implement Cisco CIP routers at one of the data centers, replacing the channel-attached FEPs. A remote router is then installed in one of the regional offices. The two routers are connected using DLSw+, as shown in Figure: Subarea to APPN migration-phase one.
Figure: Subarea to APPN migration-phase one
As Figure: Subarea to APPN migration-phase one shows, the FEPs at the regional office continue to provide boundary functions to the Token Ring and X.25-attached devices. The two DLSw+ routers handle the traffic between the FEP at Data Center 1 and the FEP at the regional office. SNA COS is preserved in this environment.
After stability of the routers is ensured, the network designer proceeds to the next phase. As Figure: Subarea to APPN migration-phase two shows, this phase involves installation of a second router in Data Center 2 and the regional office. At this point, FEP-to-FEP communications between regional offices and data centers are handled by the routers via DLSw+.
Figure: Subarea to APPN migration-phase two
Continuing with the migration plan, the network designer's next step is to install an additional CIP router in each data center to support traffic between the two data centers. As shown in Figure: Subarea to APPN migration-phase three, the links that are connecting the FEPs in Data Center 1 and Data Center 2 are moved one by one to the routers.
Figure: Subarea to APPN migration-phase three
APPN will be enabled to support the traffic between Data Center 1 and Data Center 2. Eventually, the FEP-based network will become a router-based network. The NCR Comten processors will become obsolete. Two of the NCR Comten processors will be kept to provide SNI support to external organizations. Figure: Subarea to APPN migration-phase four illustrates the new router-based network.
Figure: Subarea to APPN migration-phase four
The communication links that formerly connected the FEPs in the two data centers are now moved to the routers. The FEPs at the data centers can be eliminated. The FEPs at the regional offices are merely providing the boundary functions for dependent LU devices, thus allowing SNA COS to be maintained. The next phase is to migrate the SNA boundary functions support from the FEP to the remote router at the regional office by enabling APPN and DLUR. After this is complete, all the FEPs can be eliminated.
The next step is to migrate from DLSw+ to APPN between the data center routers and the regional office routers. This is done region by region until stability of the network is ensured. As shown in Figure: Subarea to APPN migration-phase five, DLUR is enabled to support the dependent PU devices in the regional offices. X.25 attached dependent PU2.0 devices that are formerly connected to the FEPs using NPSI are supported via Qualified Logical Link Control (QLLC) in the router. QLLC is the standard for SNA encapsulation for X.25.
Figure: Subarea to APPN migration-phase five
Example of APPN/CIP in a Sysplex Environment
This section examines APPN and the CIP routers in a Sysplex (system complex) environment. It provides an overview of the Sysplex environment and its relationship with APPN along with a description of how to use the following three approaches to support the Sysplex environment:
- Sysplex with APPN Using Subarea Routing-Option One
- Sysplex Using Subarea/APPN Routing-Option Two
- Sysplex Using APPN Routing-Option Three
It also describes how APPN provides fault tolerance and load sharing capabilities in the data center.
Sysplex provides a means to centrally operate and manage a group of multiple virtual storage (MVS) systems by coupling hardware elements and software services. Many data processing centers have multiple MVS systems to support their business, and these systems often share data and applications. Sysplex is designed to provide a cost-effective solution to meet a company's expanding requirements by allowing MVS systems to be added and managed efficiently.
A Sysplex environment consists of multiple 9672 CMOS processors, and each CMOS processor presents a VTAM domain. The concept of multiprocessors introduces a problem. Today, users are accustomed to single images. For example, IMS (Information Management System) running on the mainframe can serve the entire organization on a single host image. With the multiprocessor concept, you would not want to instruct User A to establish the session with IMS on System A and User B to establish the session with IMS on System B because IMS might run on either system.
To resolve this, a function called generic resource was created. The generic resource function enables multiple application programs, which provide the same function, to be known and accessed by a single generic name. This means that User A might sometimes get IMS on System A, and sometimes get IMS on System B. Because both systems have access to the same shared data in the Sysplex, this switching of systems is transparent to the users. VTAM is responsible for resolving the generic name and determining which application program is used to establish the session. This function enables VTAM to provide workload balancing by distributing incoming session initiations among a number of identical application programs that are running on different processors.
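The generic resource idea can be sketched in a few lines of Python. This is purely illustrative (the class and all names are my own, not a VTAM interface): one generic name maps to several identical application instances, and each incoming session initiation is handed to the next instance in turn.

```python
import itertools

class GenericResourceRegistry:
    """Toy model of the generic resource function: one generic name maps
    to several identical application instances, and incoming session
    initiations are spread across them for workload balancing."""

    def __init__(self):
        self._instances = {}  # generic name -> list of instance names
        self._cycles = {}     # generic name -> round-robin iterator

    def register(self, generic_name, instance):
        # An application instance announces itself under the generic name.
        self._instances.setdefault(generic_name, []).append(instance)
        self._cycles[generic_name] = itertools.cycle(self._instances[generic_name])

    def resolve(self, generic_name):
        # Pick the next instance for this generic name.
        return next(self._cycles[generic_name])

registry = GenericResourceRegistry()
registry.register("IMS", "IMS_on_System_A")
registry.register("IMS", "IMS_on_System_B")

# Four session requests for "IMS" alternate between the two systems;
# because both share the Sysplex data, users never notice the switch.
sessions = [registry.resolve("IMS") for _ in range(4)]
print(sessions)
```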
Generic resource runs only on VTAM with APPN support. In order to achieve session load balancing across the different processors, users must migrate VTAM from subarea SNA to APPN. The rest of this section examines three options for supporting the Sysplex environment.
Sysplex with APPN Using Subarea Routing-Option One
The first option to support the Sysplex environment is to convert the CMC host to a composite network node. Traditionally, the CMC host was the VTAM that owned all of the network's SNA resources. With this approach, the composite network node is used to describe the combination of VTAM and Network Control Program (NCP). This means that VTAM and NCP function together as a single network node. In Figure: CMC composite network node with subarea routing-option one, the CMC host and the FEPs are configured as the composite network node.
Figure: CMC composite network node with subarea routing-option one
The VTAM CMC host owns the FEPs. Each FEP is connected to the 9672 CMOS processors through a parallel channel. Each 9672 CMOS processor is configured as a migration data host and maintains both an APPN and a subarea appearance.
Each migration data host establishes subarea connections to the FEPs using Virtual Route Transmission Group (VRTG), which allows APPN to be transported over traditional subarea routing. CP-CP sessions between the CMC host and the 9672 migration data hosts are established using VRTG. Generic resource function is performed in APPN, but all routing is subarea routing. This is the most conservative way to migrate to a Sysplex.
The disadvantage of this approach is that using subarea routing does not provide dynamic implementation of topology changes in APPN, which is available with APPN connection. If you need to add a CMOS processor, subarea PATH changes to every subarea node are required. Another drawback of this approach is that running APPN over subarea routing introduces complexity to your network.
Sysplex Using Subarea/APPN Routing-Option Two
The second option to support the Sysplex environment is to use subarea/APPN routing. This approach is similar to Option One, which was described in the preceding section. With this second approach, the CMC host and the FEPs are converted to a composite network node, as shown in Figure: CMC composite network node with APPN routing-option two.
Figure: CMC composite network node with APPN routing-option two
As shown in Figure: CMC composite network node with APPN routing-option two, the two 9672 CMOS processors are converted to pure end nodes (EN A and EN B). APPN connections are established between the 9672s and the FEPs. Sessions come into the CMC in the usual way and the CMC does subarea/APPN interchange function. This means that sessions are converted from subarea routing to APPN routing on the links between the FEPs and the 9672s.
A disadvantage of this second approach is that it performs poorly because the FEPs must perform an extra conversion. This approach also requires more NCP cycles and memory. Although this is very easy to configure and it does not require any changes to the basic subarea routing, the cost of the NCP upgrades can be expensive.
Sysplex Using APPN Routing-Option Three
The third option to support the Sysplex environment is to use APPN routing. With this approach, you use DLUR as a front end to the CMC-owned logical units. Figure: Sysplex with DLUR using CIP-option three illustrates this configuration.
Figure: Sysplex with DLUR using CIP-option three
As shown in Figure: Sysplex with DLUR using CIP-option three, this is a pure APPN network with APPN routing only. Each CMOS end-node processor is attached to the DLUR routers through APPN. Note that the DLUR routers could be remote and not directly next to the mainframe computers (for example, there could be intervening routers).
This is the preferred approach for implementing the Sysplex environment for the company used in this sample scenario. The following section provides more details on this sample implementation.
The Company's Network
The company used in this example has a very large IP backbone and a very large SNA network. Today, its multiprotocol and SNA network are separate. The company's goal is to consolidate the traffic across the multiprotocol Internet. The company has chosen IP as its strategic backbone protocol of choice. To transport the SNA traffic, DLSw+ is used.
In the data center, the company plans to support five different IBM Sysplex environments. Its objective is to have the highest degree of redundancy and fault tolerance. The administrators decided not to run APPN throughout their existing multiprotocol network but chose APPN in the data center to provide the required level of redundancy.
Figure: Example of APPN in the data center shows the configuration of the company's data center. The diagram on the top right in this figure is a logical view of one Sysplex environment and how it is connected to the multiprotocol network through the CIP/CSNA routers and the APPN routers. Each CIP/CSNA router has two parallel channel adapters to each Sysplex host (Sysplex 1 and Sysplex 2) through separate ESCON Directors. To meet the company's high availability requirement, this configuration has no single points of failure.
Figure: Example of APPN in the data center
In each Sysplex environment, there are a minimum of two network nodes per Sysplex acting as a DLUS. VTAM NNA is designated as the primary DLUS node. NNB is designated the backup DLUS. The remaining hosts are data hosts configured as end nodes. These end node data hosts use NNA as the network-node server.
There are two CIP routers to support every Sysplex environment and at least two APPN routers running DLUR to provide boundary functions support for remote devices. The traffic is expected to load share across the two CIP routers. Consequently, APPN provides load balancing and redundancy in this environment.
From an APPN standpoint, NNA in Figure: Example of APPN in the data center can be configured as the primary DLUS. NNB can be configured as the backup DLUS. The following is a configuration example for NN1. NN2 would be configured similarly.
!
appn control-point CISCONET.NN1
 dlus CISCONET.NNA
 backup-dlus CISCONET.NNB
 dlur
 complete
When the primary DLUS host goes out of service for any reason, the DLUR node is disconnected from its serving DLUS. The DLUR node retries the DLUS/DLUR pipe with NNA. If unsuccessful, it tries its backup DLUS.
To achieve load balancing, every DLUR router defines two parallel APPN transmission groups with equal weights to every VTAM host using the following configuration:
!
! Link to VTAM ENA via CIP router 1
!
appn link-station LINK1ENA
 port FDDI0
 lan-dest-address 4000.3000.1001
 complete
!
! Link to VTAM ENA via CIP router 2
!
appn link-station LINK2ENA
 port FDDI0
 lan-dest-address 4000.3000.2001
 complete
!
! Link to VTAM ENB via CIP router 1
!
appn link-station LINK1ENB
 port FDDI0
 lan-dest-address 4000.3000.1002
 complete
!
! Link to VTAM ENB via CIP router 2
!
appn link-station LINK2ENB
 port FDDI0
 lan-dest-address 4000.3000.2002
 complete
!
! Link to Primary DLUS NNA via CIP router 1
!
appn link-station LINK1NNA
 port FDDI0
 lan-dest-address 4000.3000.1003
 complete
!
! Link to Primary DLUS NNA via CIP router 2
!
appn link-station LINK2NNA
 port FDDI0
 lan-dest-address 4000.3000.2003
 complete
!
! Link to Backup DLUS NNB via CIP router 1
!
appn link-station LINK1NNB
 port FDDI0
 lan-dest-address 4000.3000.1004
 complete
!
! Link to Backup DLUS NNB via CIP router 2
!
appn link-station LINK2NNB
 port FDDI0
 lan-dest-address 4000.3000.2004
 complete
As shown in the preceding configuration, NN1 defines two APPN transmission groups to ENA, ENB, NNA, and NNB. There are two channel attachments to each host, and each attachment is connected to separate hardware (for example, a CIP card, CIP router, or ESCON Director). The duplicate hardware ensures that if any single physical component fails, the host is still accessible through the alternative path.
From an APPN perspective, there are two transmission groups that connect a DLUR router and every host. One transmission group traverses CIP Router 1 and the other traverses CIP Router 2. When one path fails, the APPN transmission group becomes inoperative. The second transmission group provides an alternative route for host connection through the other path.
All the subarea SSCP/PU and SSCP/LU sessions flow on one of the transmission groups between the DLUR router and the primary DLUS host. As for the LU-LU sessions, the two possible routes between the DLUR router and a VTAM host are available. The DLUR router and a VTAM host select one of these two routes at random for the LU-LU sessions. This randomization provides a certain amount of load distribution across the two CIP routers, although it might not necessarily be statistically load balanced.
There are multiple DLUR routers that support downstream SNA devices. The following is a sample configuration for DLUR router NN1:
source-bridge ring-group 100
dlsw local-peer peer-id 172.18.3.111 promiscuous
!
interface FDDI0
 ip address 172.18.3.111 255.255.255.0
!
appn control-point NETA.NN1
 complete
!
appn port VDLC1 vdlc
 vdlc 100 4000.1000.2000
 complete
The following is a sample configuration for DLUR router NN2:
source-bridge ring-group 200
dlsw local-peer peer-id 172.18.3.112 promiscuous
!
interface FDDI0
 ip address 172.18.3.112 255.255.255.0
!
appn control-point NETA.NN2
 complete
!
appn port VDLC2 vdlc
 vdlc 200 4000.1000.2000
 complete
A workstation gains access to the host through the DLUR router. A workstation defines 4000.1000.2000 as the destination MAC address in the emulation software. This virtual MAC address is defined to every DLUR router. When initiating a connection, a workstation sends an all-routes broadcast Test command frame to the MAC address to which it wants to connect. The remote DLSw+ router sends an explorer frame to its peers. Both NN1 and NN2 respond with ICANREACH. The DLSw+ router is configured to use the load balancing mode. This means that the DLSw+ router caches both NN1 and NN2 as peers that can reach the host. Host sessions are established through NN1 and NN2 in a round robin fashion. This allows the company to spread its SNA traffic over two or more DLUR routers. If NN1 becomes unavailable, sessions that traverse NN1 are disruptive but they can be re-established through NN2 with negligible impact.
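The load-balancing behavior just described can be modeled with a short Python sketch (illustrative only, not DLSw+ code; the class and method names are my own): every peer that answers ICANREACH for a MAC address is cached, and new circuits are assigned to the cached peers in round-robin fashion.

```python
class PeerCache:
    """Sketch of DLSw+ load-balancing mode: peers that answered
    ICANREACH for a destination MAC address are cached, and new host
    sessions are spread across them in round-robin fashion."""

    def __init__(self):
        self._reach = {}  # MAC address -> list of capable peers
        self._next = {}   # MAC address -> index of the next peer to use

    def icanreach(self, mac, peer):
        self._reach.setdefault(mac, []).append(peer)
        self._next.setdefault(mac, 0)

    def pick_peer(self, mac):
        peers = self._reach[mac]
        i = self._next[mac] % len(peers)
        self._next[mac] = i + 1
        return peers[i]

    def peer_down(self, mac, peer):
        # Sessions on a failed peer are disrupted, but new sessions
        # re-establish through the remaining cached peers.
        self._reach[mac].remove(peer)

cache = PeerCache()
cache.icanreach("4000.1000.2000", "NN1")
cache.icanreach("4000.1000.2000", "NN2")
picks = [cache.pick_peer("4000.1000.2000") for _ in range(4)]
print(picks)  # NN1 and NN2 alternate
```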
This design increases overall availability by using duplicate virtual MAC address on the DLUR router. The dual paths provide the option for a secondary path to be available for use when the primary path is unavailable. Another advantage is that this design allows for easy scaling. For example, when the number of SNA devices increases, buffer memory might become a constraint on the DLUR routers. The company can add a DLUR router to support the increased session load. This topology change does not require any network administration from any remote routers or the data center routers.
Example of APPN with FRAS BNN
This section describes the design considerations when building a large enterprise APPN network. It lists the current technologies that allow the company in this example to build a large APPN network. Each option is discussed in detail. FRAS BNN is chosen as an interim scalability solution to reduce the number of network nodes in the network. This allows the network to scale to meet the company's expanding requirements.
In this example, a government agency has a network that consists of one data center and approximately 100 remote sites. Within the next few years, its network is expected to increase to 500 remote sites.
Figure: Sample APPN network for a government agency shows a simplified version of the agency's current APPN network.
Figure: Sample APPN network for a government agency
The data center consists of 20 mainframe processors from IBM and a variety of other vendors. The IBM mainframes are MVS-based and are running VTAM. They are also configured as NN/DLUS and EN data hosts. No subarea protocol exists in this network. Other non-IBM mainframes are configured as either an EN or LEN node.
The user platform is OS/2 running Communications Server at all the remote sites with connectivity needs to the data center mainframe computers. Initially, there are no any-to-any communication requirements in this network. The applications supported are LU type 2 and LU6.2.
APPN in the Data Center
The host mainframes in Figure: Sample APPN network for a government agency are connected using the external communication adapter (XCA) connection over the 3172 Interconnect Controllers. The non-IBM data hosts (Companies A, B, and C) use the VTAM IBM mainframe as the network-node server. To keep the amount of TDU flows to a minimum, CP-CP sessions exist only between VTAM and the data center routers. There are no CP-CP sessions among the routers located at the data center.
To achieve the optimal route calculation without explicitly defining meshed connection definitions, every end node and network node at the data center is connected to the same connection network. This allows a session to be directly established between two data center resources without traversing the VTAM network node. As Figure: Sample APPN network for a government agency shows, when an LU-LU session between resources at EN3A and Company A's mainframe is set up, the optimal route is directly through the FDDI ring to NN1 and NN3.
To reduce the number of broadcast searches to a maximum of one per resource, VTAM is configured as the CDS in this network. The CDS function is very effective in this network because the resources in the network require access only to resources at the host mainframes in the data center. These host mainframes register their resources with VTAM, which is their network-node server. Consequently, VTAM always has location information for every resource at the data center. This means that VTAM never has to perform LOCATE broadcast searches.
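The effect of the central directory server can be illustrated with a small Python model (names and structure are my own, purely for illustration): lookups for registered resources are answered from the directory, and an unregistered resource costs at most one broadcast search before it, too, is cached.

```python
class CentralDirectory:
    """Toy model of a central directory server: end nodes register
    their resources, so a LOCATE for a registered resource is answered
    from the directory instead of a network-wide broadcast search."""

    def __init__(self):
        self._directory = {}  # resource name -> owning node
        self.broadcasts = 0   # count of broadcast searches performed

    def register(self, resource, owner):
        self._directory[resource] = owner

    def locate(self, resource, network_search):
        if resource in self._directory:
            return self._directory[resource]  # no broadcast needed
        self.broadcasts += 1                  # at most one per resource
        owner = network_search(resource)
        self._directory[resource] = owner     # cache for next time
        return owner

cds = CentralDirectory()
cds.register("IMSAPPL", "Company_A_host")

# A registered resource is found with zero broadcast searches.
found = cds.locate("IMSAPPL", network_search=lambda r: None)
print(found, cds.broadcasts)
```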
APPN in the Remote Site
The network depicted in Figure: Sample APPN network for a government agency has approximately 30 to 40 CS/2 workstations in every remote site. Every user workstation is configured as an end node. Each end node supports eight independent LU6.2 sessions and four dependent LU sessions. A Cisco router at every location forwards the traffic to the data center. The router's network node function provides the intermediate routing node function for the independent LUs. The DLUR function provides the dependent LU routing function for the dependent LUs.
This network will eventually consist of 500 remote network node routers, 100 data center routers, and eight mainframe computers. Typically, a 600-node APPN network will have scalability issues. The rest of this section examines the following two options that you can use to address scalability issues in an APPN network:
- Implementing border node on VTAM to partition the network into smaller subnets
- Using FRAS BNN to reduce the number of network nodes in the network
Using Border Node on VTAM to Partition the Network into Smaller Subnets
By implementing the concept of border node on VTAM, a peripheral subnetwork boundary is introduced between NN1 and VTAM, and between NN2 and VTAM, as shown in Figure: APPN network with VTAM extended border node.
Figure: APPN network with VTAM extended border node
There would be no topology information exchange between VTAM and the data center NN routers. The eight mainframe computers would be in the same subnet. Every data center router would support multiple access routers and they would form their own subnet. Each subnet is limited to a maximum of 100 network nodes. This configuration would prevent topology information from being sent from one subnet to another, thus allowing the network to scale to over 600 network nodes.
Although this approach addresses the TDU flow issue, configuring VTAM as a border node in this environment comes at a considerable loss of function. First, two APPN subnetworks cannot be connected through a connection network: LU-LU sessions between resources at Company A's host and remote resources would be set up through an indirect route via the VTAM border node, which is clearly not an optimal route. Second, the central directory server function is lost, because the VTAM border node presents an end-node image to NN1, which prevents NN1 from discovering the central directory server in the network.
The next section examines an alternative approach of using FRAS BNN to reduce the number of network nodes in the network.
Using FRAS BNN to Reduce the Number of Network Nodes
Figure: APPN network with FRAS BNN shows how FRAS BNN can be used to reduce the number of network nodes in the company's network. All the server applications are on the mainframe computers and devices only require host access. APPN routing is not essential for this company.
Figure: APPN network with FRAS BNN
Implementing FRAS BNN rather than a full APPN network node on the access routers directly reduces the number of network nodes. This allows the network to scale without the concern of TDU flows. This is proven to be a viable solution for this company for the time being because LAN-to-LAN connectivity is not an immediate requirement. The remote routers can be migrated to support APPN border node when it becomes available.
In this environment, CP-CP sessions are supported over the Frame Relay network. The central directory server and the concept of Connection Network are fully supported. LU-LU sessions can be set up using the direct route without traversing VTAM, as shown in Figure: APPN network with FRAS BNN. The only function that is lost with FRAS BNN is COS for traffic traveling from the remote FRAS BNN router to the data center.
Recall that this article discussed developing the network design and planning a successful migration to APPN. It covered the following topics: | 1 | 7 |
The book is broken down into 15 chapters, each explaining and expounding on a stage of architectural modeling.
This chapter is basically an intro to architectural visualization and Blender 3D. Architectural visualization is best explained as previewing or showing something that does not yet exist, represented with computer-generated imagery, from different angles of the same project if need be. This saves time and money compared to the old-fashioned way of technical drawings, which were difficult to understand.
We get to understand the importance of detail in architectural visualization, as it makes a whole lot of difference when it comes to achieving the desired level of realism, together of course with a whole lot of other things like lighting, texturing, and going a step further by using external rendering systems like YafaRay. That's where Blender 3D comes in.
Blender 3D is a 15MB open-source software package that is a very powerful tool in the hands of a skilled artist. Apart from the facts mentioned above, Blender 3D has been used to create whole 3D movies like:

Elephants Dream and Big Buck Bunny, both of which are freely available for download.
Pretty impressive for a 15MB open-source software.
Blender 3D can be run on different operating systems:
- Microsoft Windows XP, Vista or Windows 7
- Mac OS X 10.3 and later
- Irix 6.5 MIPS3
Blender has very minimal system requirements, depending of course on the project undertaken. For simple projects that require minimal detail, these are the requirements:
- 3-button mouse
- OpenGL graphics card with 16MB RAM
- 300 MHz CPU
- 128 MB RAM
- 1024 x 768 pixel display
- Free hard disk space
For a maximum performance machine:
- 2GHz quad core CPU
- 2 GB RAM
- 1920 x 1200 pixels display with 24-bit color
- 3-button mouse
- OpenGL graphics card with 128 or 256 MB RAM
Blender 3d doesn’t function on it’s own.It requires other softwares like:
GIMP, for post-rendering edits and also for creating textures.
A CAD application, which pretty much provides the precise technical drawings used for modeling. It saves you a lot of time.
The presentation process after editing also requires some tools, in this case Inkscape or OpenOffice.org Impress.
The book further goes on to explain the relationship between a CAD application and Blender 3D, and how making use of libraries avoids the time-consuming process of modeling objects from scratch by simply importing them from a library instead.
Here’s a list of sites where one can get free models for Blender 3D.
- http://resources.blogscopia.com — furniture models in the native Blender file format.
- http://www.e-interiors.net — lots of pictures and free models of furniture. Most files are in 3DS or DXF file formats.
- http://www.linedstudio.com — more furniture models and scenes already in Blender native file format.
- http://blender-archi.tuxfamily.org/Models — collection of models to use in Blender for architectural visualization. All are in the Blender native file format.
Blender has some pretty impressive stuff in its gallery, which is well worth checking out.
The gallery is updated on a regular basis.
Chapter 2 is quite basic and covers the Blender interface. This chapter is for those who are not so familiar with Blender; if you are already familiar with it, you can skip this chapter.
It explores the Blender interface, from windows and menus to selecting objects and whatnot. It basically takes you through familiarizing yourself with Blender.
After familiarizing yourself with Blender, in chapter 3 we learn the different aspects of architectural modeling and landscaping.
In Blender, we typically start with a cube. There are different types of objects in Blender, like curves, metaballs, and surfaces.

Curves come in handy when modeling curved objects; metaballs are clay-like objects typically used in terrain modeling; surfaces are best for landscape creation.

Coming back to meshes, we have the plane, cube, circle, UV sphere, icosphere, cylinder, cone, grid, monkey, and torus.

The ones commonly used in architectural modeling are cubes, planes, and circles.
After settling on what you want to make, most commonly from a cube, we get into the detail of mesh editing. To model something complex from a cube, you need to know your way around the mesh editing tools, found in the Mesh Tools menu of the editing panels.
In the process of modeling, at some point comes the need for precise transformations, which Blender supports. By holding down the Ctrl key on the keyboard while moving the mouse, we can use the grid lines as a guide; the same applies to scaling and rotation.
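The Ctrl-key snapping behavior amounts to rounding each transformed coordinate to the nearest grid increment, which can be illustrated with a tiny Python function (a hypothetical helper for illustration, not Blender's own code):

```python
def snap(value, increment):
    """Snap a coordinate to the nearest grid increment, mimicking
    transforming while holding Ctrl in Blender."""
    return round(value / increment) * increment

print(snap(1.37, 0.5))   # 1.5
print(snap(2.74, 0.25))  # 2.75
```

With a grid increment of 0.5, any grabbed vertex lands on a multiple of 0.5, which is what keeps walls and floors lined up.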
We get to explore more editing tools, like the loop cut tool, which lets you subdivide objects to create complex meshes.
We learn about merging vertices and removing double vertices. Extruding is a major tool for creating new vertices and faces on an object; extrusion can be done from vertices, edges, or faces, whichever suits the moment. At some point, extrusion needs to be restricted to a certain axis: pressing one of the axis keys (X, Y, or Z) restricts the transformation to that selected axis, which is pretty helpful in some situations. We are given an easy-to-understand illustration of this.
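Two of these operations, removing double vertices and restricting a move to one axis, reduce to simple vertex math. The following Python helpers are illustrative sketches of the underlying idea, not Blender's API:

```python
def remove_doubles(vertices, threshold=0.0001):
    """Keep only vertices farther than `threshold` from any already-kept
    vertex -- the idea behind Blender's Remove Doubles tool."""
    kept = []
    for v in vertices:
        if all(sum((a - b) ** 2 for a, b in zip(v, k)) > threshold ** 2
               for k in kept):
            kept.append(v)
    return kept

def constrain_to_axis(delta, axis):
    """Zero out every component of a move except the chosen axis,
    like pressing X, Y, or Z during a transform."""
    index = "xyz".index(axis)
    return tuple(d if i == index else 0.0 for i, d in enumerate(delta))

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.00005, 0.0, 0.0)]
print(remove_doubles(verts))                    # near-duplicate of the origin is merged away
print(constrain_to_axis((1.2, 3.4, 5.6), "z"))  # (0.0, 0.0, 5.6)
```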
After you are done with the basic concept of a modeled object, there are times when you need to smooth it, or maybe you need to create a building with several floors, and doing it the long way is pretty time consuming. That's where modifiers come in.

Commonly used modifiers in architecture are Array, Boolean, and Subsurf. The Subsurf modifier smooths models to the desired level; the Array modifier creates copies of objects, so a building with several floors can be done in an instant; the Boolean modifier gives more options for editing and creating complex meshes.
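The Array modifier's effect on a mesh can be illustrated with a short Python sketch (a toy model for illustration, not Blender code): the same set of vertices is repeated with a constant offset, the way identical floors are stacked in a building.

```python
def array_copies(vertices, count, offset):
    """Repeat a set of vertices `count` times with a constant offset,
    the way the Array modifier stacks identical floors."""
    result = []
    for i in range(count):
        for (x, y, z) in vertices:
            result.append((x + i * offset[0],
                           y + i * offset[1],
                           z + i * offset[2]))
    return result

# A hypothetical two-vertex "floor" stacked three times, 3 units apart in Z.
floor = [(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)]
building = array_copies(floor, count=3, offset=(0.0, 0.0, 3.0))
print(len(building))  # 6
```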
We further explore more modifier tools, working with groups to simplify complex projects, and proportional editing.
Modeling for Architecture.
The previous chapters were like a big intro to the actual architectural modeling.
This chapter covers the creation of floors, walls, roofs and other elements.
But first, before getting into it, some very important points are made clear about architecture that are vital for architectural modeling.
For one, we get a brief explanation of the differences between architectural modeling and other forms of modeling, one of them being that model scales are usually big, because buildings by nature are very big.
Another key aspect of architectural modeling is planning. Time is money, and hence planning is everything.

Planning is done in two ways, depending on who the author of the project is: a project in which you are the author, and one in which a client is the author. When you are the author, there is freedom to change things as you go along; when the client is the author, that freedom is not there unless the client permits it.
The book also explains the importance of modeling with precision by using the background grid as a guide. This is done by holding down the Ctrl key while scaling, rotating or moving an object. Another option given is using edge length, where the length of an edge is made visible by activating it in edit mode.
Working with layers helps in reorganizing your work and simplifies what would otherwise be a complex project.
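The grid-snapping idea (holding Ctrl while transforming) boils down to rounding a coordinate to the nearest grid increment. A minimal sketch, with a made-up step value rather than anything from Blender itself:

```python
# Conceptual sketch of grid snapping: a freely dragged coordinate
# is rounded to the nearest multiple of the grid step.

def snap(value, step):
    """Round a coordinate to the nearest grid increment."""
    return round(value / step) * step

# Dragging a vertex to 1.27 on a 0.5-unit grid lands it on 1.5:
print(snap(1.27, 0.5))  # 1.5
```

This is why Ctrl-constrained moves give you clean, repeatable dimensions instead of values like 1.27 scattered through the model.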
We then get into actual practice. Brito starts with walls, which really don't have one specific way of being modeled because of the different needs of each project.
He further goes on to explain and show how to model rounded corners (which would otherwise be tricky) using simple, easy to understand images, and openings such as windows and doors and how to go about them, as they can be a real headache if not thought through.
He further explores floors and ceilings, using CAD files that should be in DXF format, which is the only CAD format that Blender can read.
This chapter covers adding detail to models. This, as I said earlier, plays a big role in achieving realism. Things like doors, windows, frames and door handles are pretty important.
He expounds on the making of windows, keeping in mind the different kinds of windows, like the double-hung sash window and the skylight. One thing that stands out here is the use of measurements in creating a window so as to achieve the desired level of realism. The level of detail to be used is always determined by the location of the camera: a camera that is far away requires less detail on the object being modeled, while one that is close requires more detail.
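The camera-distance rule of thumb can be phrased as a tiny decision function. The threshold values here are made up for illustration; the book gives the principle, not specific numbers:

```python
# Rule of thumb from the book: detail is determined by how far
# the camera is from the object. Thresholds below are illustrative.

def detail_level(camera_distance, near=5.0, far=20.0):
    """Pick a level of detail from the camera distance: closer means more detail."""
    if camera_distance <= near:
        return "high"
    if camera_distance <= far:
        return "medium"
    return "low"

print(detail_level(3.0))   # high
print(detail_level(50.0))  # low
```

A window filling the frame earns its handles and frame profiles; the same window on a building seen from across the street can stay a simple box with a texture.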
Then he gets into doors. I have to admit, detail pretty much excites me. The explanation of windows and doors is pretty cool.
After we are done with the building, windows and doors comes furniture, which further increases the level of realism.
As explained, furniture can be classified into two categories: internal and external.
Internal furniture is what occupies the inside of a building, like beds and chairs; external covers things like cars, fences and fountains. Allan explains why detail is required in some aspects of modeling furniture.
We get to learn the option of either using a library or modeling something yourself. For reference images and free models, these sites will be of importance:
We are shown how to append models from external libraries by going to the File menu and accessing the Append or Link option. A shortcut for that is Shift + F1.
He explains how to model a sofa and a chair, which is actually quite simple, and after all that you have achieved something that will add some level of realism to your project.
After all the modeling is done come materials. Chapter 7 covers how to create, organize and apply materials to objects, and Allan shows us how to go about doing that.
We need an understanding of how materials work in order to be able to achieve realism here as well.
What makes wood look like a wooden surface and metal look like a metallic surface? It's all reflection: how light reflects off those surfaces.
An understanding of this will be big in 3D.
We get to explore working with colors, gradients and shaders, which are used to determine how a material reacts with light.
Ray tracing is an option for achieving a reflective surface feel, and with these things models appear more realistic.
We learn how to create glass, mirrors, glossy reflections, raytraced shadows and glossy transparency.
One thing I must mention is wireframe. In architecture you might be required to represent a model in a structured manner for better understanding, or to make a big impression, and that's where wireframes come in. In this mode one can see the structural form of a model, so understanding and explaining it becomes easier.
Materials were the first stage and only contributed a certain level of realism. Textures are where the magic is! Textures are described in an understandable, easy manner as image files of the surfaces of real-life objects like wood, stone and glass. Considering that some of these surfaces can be difficult to find, or that you might have to buy them or spend a lot of time on the internet searching for them, libraries come in handy. Creating your own textures and storing them in your library will be of great advantage in future projects, because most of them are commonly used over and over in different projects.
The book explains the different kinds of textures in Blender, procedural and non-procedural, and how they are used. We further learn about texture libraries, how to apply textures, and mapping, both normal mapping and UV mapping. UV mapping is an elaborate way of placing textures in their exact locations.
Free texture websites:
Chapter 9 expounds on UV Mapping.
This chapter is on lighting.LIGHTING!
You need to understand lighting well to be able to get realistic results. Blender has different kinds of lights, and each is suited to different environments. Brito tells of the different lights that Blender has and where each is best used. Wherever we have light there must be shadows, so how do you get soft shadows, how do you get sharp shadows, and under what kinds of lights?
We are given a small exercise that helps you better understand this.
Chapter 11 goes on to delve into advanced lighting based on two main techniques, radiosity and ambient occlusion, and the circumstances in which they are used.
Chapter 12 is about YafaRay, an external renderer that has features the normal Blender rendering engine does not have, such as global illumination and raytracing. YafaRay is really good when you want to go to the next level in renderings, despite the downside of higher machine resource consumption and hence longer render times.
Allan takes us through the installation of YafaRay, how to run it alongside Blender, understanding the YafaRay setup, materials used in YafaRay, and render methods.
So far the chapters have covered planning a project, modeling, handling materials and textures, and lighting. Chapter 14 is animation. Animation in architectural visualization most of the time means walk-throughs.
Just like any typical project, you have to plan the animation process; otherwise you might end up miscalculating the time set for an animation, making it either too short or far too long, and by the time the rendering is done after a couple of hours it will all be useless.
Allan explains the aspects involved in the process of animation, like keyframes, managing keyframes, IPO curves and more. By the end of the chapter you should have a good understanding of animation in architecture.
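The arithmetic behind that planning warning is simple enough to sketch. The numbers below are illustrative, not from the book:

```python
# Back-of-the-envelope render budget for a walk-through:
# total frames = duration * frame rate, and each frame takes
# a roughly known time to render.

def render_budget(duration_s, fps, seconds_per_frame):
    """Return (total frames, total render time in hours)."""
    frames = duration_s * fps
    return frames, frames * seconds_per_frame / 3600.0

frames, hours = render_budget(duration_s=30, fps=25, seconds_per_frame=40)
print(frames, round(hours, 1))  # 750 frames, about 8.3 hours
```

Doing this sum before hitting Render is exactly the kind of planning the book is pushing: a 30-second walk-through at 40 seconds a frame is an overnight job, not an afternoon one.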
At this stage the project is done, unless maybe after all the time it has taken to render you find a flaw in the rendered image. Then what? GIMP. It's image-editing software that is a really big help in adjusting rendered images and also in the creation of textures.
This chapter explores GIMP.
So far we have been operating on Blender 3D 2.49b. Chapter 15 is about Blender 3D 2.50, a new version of Blender. It's out already, but I doubt many have used it much, including me. Its interface is different from the other Blender versions, so you need to be shown around to know where everything is, but the shortcuts are pretty much the same.
We are told how to manage windows in 2.5 and how to go about modeling in this version. I have to admit I had to sweat a bit to learn how to create models like a cube or a plane, and I have to remind myself from time to time.
The differences between Blender 2.5 and 2.49b are pretty clear once you go through this chapter, or if you already have, apart from the obvious interface, which is pretty cool. I like this one better; it's more welcoming, and you get that interest in learning it.
An area with many changes is deformations and animation, where practically anything can be animated.
After going through the book, there's little difference between this book and the previous one. The cover itself is pretty much the same as the other one, and I mistook it for the same book, which could give someone the assumption that there's nothing new to learn.
For those who have read the first book, you probably already understand what wasn't covered in it. I'd say this book is for those who want to learn and understand architecture and haven't read the first book. There's added information about architecture that is vital to understanding architecture, buildings and scenery.
Book Review: Blender3d_Incredible Machines
By Chepkech Kevin
Incredible Machines is a 303-page book written by Allan Brito and published by Packt Publishing (packtpub.com). He has written two other books, Blender 3D: Architecture, Buildings and Scenery and Blender3d - Guia do Usuario. He covers the use of Blender and other tools for architectural visualization at his website http://www.blender3darchitect.com, where he can be reached.
From the name, the book is about the making of machines, sci-fi ones not of this world, unless maybe in your imagination. He covers modeling, rendering and animation of these machines and all the steps that lead to their completion, such as UV mapping, dealing with particles and their animation, using curves and so forth. It covers three different projects, starting with the simplest and moving to a more complex one (a transforming robot).
This book is for game developers, 3D artists and product designers who strive for realistic images, 3D models and videos, and also for those interested in creating realistic models using YafaRay and LuxRender.
Blender 3D 2.49 is the version used in this book, and the minimum requirements given are:
Open GL Graphics Card with 16 MB RAM
300 MHz CPU
128 MB RAM
1024 x 768 pixels display with 16-bit color
20 MB free hard disk space
However, if you really want to get maximum performance, there is a more powerful configuration:
Open GL Graphics Card with 128 or 256 MB RAM
2 GHz dual core CPU
2 GB RAM
1920 x 1200 pixels display with 24-bit color.
There isn’t much to say about the software, only that you can run Blender on almost any operating system available. The following is the list of systems that support Blender:
Windows 98, ME, 2000, XP, or Vista
Mac OS X 10.2 and later
Linux i386, x86_64/amd64 or PPC
FreeBSD 6.2 i386 and later
Irix 6.5 mips3
Solaris 2.8 sparc.
Chapter one of the book gives a brief history of Blender and tells you where and how to get it. Blender is open source software available for free at the Blender Foundation website http://www.blender.org. It goes on to introduce other applications that work hand in hand with Blender, like YafaRay (http://www.yafaray.org) and GIMP (http://www.gimp.org), which are also available for free from their respective websites.
He goes on to introduce incredible machines: what the term means, how the name came about, and how he has organized the book to tackle the three projects extensively.
Chapters two to five
These chapters deal with the first project. There's a brief explanation of why he chose that particular object as the first project, and the workflow for modeling it. He uses subdivision modeling, explains why not polygon modeling, and also tells us of other modeling techniques such as NURBS modeling and spline modeling and where they are suitable. I found the language to be an easy read and straight to the point. He introduces tools used to add detail, such as:
How to use hooks in the alignment of objects
The Spin tool to close up an object
Adding creases and rounded details
And finally, using YafaRay to create an environment, setting up lights and adding materials to achieve that ultimate realism.
Chapters 6 to 10
This is the second project, a bit more complex than the first one. He again gives the project workflow and goes ahead to model the object, starting by making the general shape of the machine and later adding detail in such a way that you can't miss anything. He uses curves to add cables and wires to the machine (I was so jazzed by this).
He tackles UV mapping and the use of Blender particles.
In rendering the machine, he further expounds on the use of YafaRay and creates an impressive environment for the spacecraft, as well as materials.
Chapters 11 to 16
I have to admit that I expected something like a lesson on how to transform Optimus Prime from a truck to a robot, but that's not what I got. If I had, then I probably would be lost right now, because it requires a lot of detail. Once I understand this fully, I'll be in a position to give Optimus a try.
This is the most complex of all the machines, and Allan begins by telling us what it is, how he goes about tackling it, and the challenges ahead, from textures and materials to animating it. In this last project he uses LuxRender instead, and expounds on the pros and cons of LuxRender as compared to YafaRay and why he used LuxRender this time.
As he adds detail to the machine, I got to discover the functions of some modifiers I didn't know, like the array modifier. He provides valuable information concerning LuxRender, from where to find it (http://www.luxrender.net) to the process of installing it. I got to learn new things concerning an unbiased render engine.
I found the chapter on animating the robot long and boring. I have to admit animation isn't one of my strong points; being an impatient person, I'd like to see fast results, but the long tutorials I have followed through to the end have been worth it, and the knowledge gained is very vital, the same with this chapter. He uses hierarchies, controllers, armatures and helpers. One of the features of LuxRender is that an image can be edited and fine-tuned within LuxRender itself, which is so cool, and afterwards GIMP can be used to clean up and remove the noise in rendered images.
There were a few grammatical errors; although I got the message, it would be unfortunate for someone who didn't and who might miss something important.
I found it pretty good and straightforward. Aspiring game developers and 3D artists will find this book a good stepping stone to learning the first steps of game animation and achieving perfect realism using the different render engines.